251

Optimization Studies in Graphene Electronics

Chari, Tarun January 2016 (has links)
The ever-growing demand for higher-bandwidth broadband communication has driven transistor operation to higher and higher frequencies. However, cut-off frequencies in the terahertz regime have remained out of reach, with the current state-of-the-art transistors exhibiting no better than 800 GHz. While the high-frequency transistor field is dominated by III-V semiconductors, it has been proposed that graphene may be a competitive material. Graphene exhibits electron and hole mobilities orders of magnitude larger than conventional semiconductors and has an atomically thin form factor. Despite these benefits, graphene transistors have yet to realize high-frequency characteristics better than those of III-V's. This thesis expands on the current limitations of graphene transistors in terms of improved fabrication techniques (to achieve higher carrier mobilities and lower contact resistances) and fundamental band-structure limitations (such as quantum capacitance and the zero-energy band gap). First, graphene transistors, fully encapsulated in hexagonal boron nitride crystals, are fabricated with self-aligned source and drain contacts and sub-100 nm gate lengths. The encapsulation shields the graphene from the external environment so that it retains its intrinsically high mobility. In this short-channel regime, transport is determined to be ballistic, with an injection velocity close to the Fermi velocity of graphene. However, the transconductance and output conductance are only 0.6 mS/µm and 0.3 mS/µm, respectively. This lackluster performance is due to a relatively thick (3.5 nm) effective oxide thickness, but also to quantum capacitance, which diminishes the total gate capacitance by up to 60%. Furthermore, the output conductance is increased by the onset of hole conduction, which leads to a second linear regime in the I-V characteristic; this is a direct consequence of graphene's zero-band-gap electronic structure. Finally, the source and drain contact resistances are large, which degrades the output current, transconductance, and output conductance. Second, improvement of the contact resistance is explored by using graphite as the contact metal to graphene. Since graphite is atomically smooth, a pristine graphite-graphene interface can be formed without the grain asperities found in conventional metals. Graphite is also lattice-matched to graphene and exhibits the same 60° symmetry. Consequently, it is discovered that the graphite-graphene contact resistance exhibits a 60° periodicity with respect to crystal orientation. When the two lattices align, a contact resistivity under 10 Ω·µm² is observed. Furthermore, contact resistivity minima are observed at two of the commensurate angles of twisted bilayer graphene. Though graphene transistor performance is band-structure limited, it may still be possible to achieve competitive high-frequency operation through h-BN encapsulation and graphite contacts.
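A back-of-the-envelope way to see the quantum-capacitance effect described above is to treat the graphene quantum capacitance as a series element with the oxide capacitance. The Python sketch below is illustrative only: the oxide numbers follow the quoted 3.5 nm effective oxide thickness, while the quantum capacitance is an assumed, representative near-Dirac-point figure, not a value taken from the thesis.

```python
# Illustrative sketch: total gate capacitance of a graphene FET with the
# quantum capacitance C_q in series with the oxide capacitance C_ox.
# C_q is an assumed near-Dirac-point value, not a number from the thesis.

EPS0 = 8.854e-12      # vacuum permittivity, F/m
K_SIO2 = 3.9          # relative permittivity of SiO2 (EOT is defined against SiO2)
EOT = 3.5e-9          # effective oxide thickness quoted in the abstract, m

c_ox = EPS0 * K_SIO2 / EOT    # oxide capacitance per unit area, F/m^2
c_q = 7e-3                    # assumed quantum capacitance, F/m^2 (~0.7 uF/cm^2)

c_total = 1.0 / (1.0 / c_ox + 1.0 / c_q)   # series combination

print(f"C_ox  = {c_ox * 1e2:.2f} uF/cm^2")     # 1 F/m^2 = 100 uF/cm^2
print(f"C_q   = {c_q * 1e2:.2f} uF/cm^2")
print(f"C_tot = {c_total * 1e2:.2f} uF/cm^2")
print(f"reduction vs. C_ox alone: {(1 - c_total / c_ox) * 100:.0f}%")
# With these assumed numbers the gate capacitance drops by close to 60%,
# consistent in magnitude with the reduction cited in the abstract.
```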
252

The Effects of Manipulation of Virtual Objects in a Game-like Environment as a Supplement to a Teaching Lesson in the Context of Physics Concepts

Chantes, Pantiphar January 2017 (has links)
Many scientific domains deal with abstract and multidimensional phenomena, and students often struggle to comprehend theoretical and complex abstractions and to apply scientific concepts to real-life contexts (Anderson & Barnett, 2013). Physics is one such domain. The way physics has traditionally been taught in school is through learning mathematical formulas and equations (Price, 2008), and researchers have proposed several ways to teach it more effectively. A number of virtual reality applications and computer games have been designed and used in science education; in physics education in particular, many studies yielded positive results when using computer games to teach abstract concepts (Maxmen, 2010; Price, 2008; Squire et al., 2004). Furthermore, both physical and virtual manipulative tools have been shown to be effective and essential in physics learning. This study examined the effects of manipulating virtual objects in a game-like environment when supplemented with a descriptive or a narrative lesson, in the context of physics concepts related to force, distance, and conservation of energy. In particular, the study examined learners' performance on a test of physics knowledge under two factors that influence learning: lesson type and type of manipulation. The study drew on research on virtual manipulatives in education and on theoretical support from constructivist theories of learning, which hold that learners form their own knowledge through meaningful interactions with the world and that prior knowledge greatly influences the construction of new knowledge in individual learners (Barbour et al., 2009; Bruner, 1966). The results suggest that providing a textual pre-lesson is important for low-prior-knowledge learners when learning physics concepts. Moreover, engaging in a manipulation task also contributed to participants' learning gains (in both the low- and high-prior-knowledge groups) as measured by the post-assessments used in this study. The results also inform educational game designers who incorporate manipulatives about the role of pre-lessons that tie to the concepts targeted by the manipulation activity, and about how different kinds of manipulation in a game-like environment affect learning outcomes. The findings suggest that the combined role of these two factors requires further research.
253

Effect of Superficial Variability of Examples on Learning Applied Probability

Jin, Tiantian January 2018 (has links)
Learning through examples is a central and widely used instructional device for teaching mathematically based subjects such as statistical probability. However, how the superficial variability of examples should be applied remains controversial. This dissertation investigates how the superficial variability of multiple examples influences students' learning and transfer of probability problem-solving, and whether content difficulty moderates that influence. Three conditions were developed and compared: a consistent-surface condition (CS), a varied-surface-within-rule condition (VSWR), and a varied-surface-between-rule condition (VSBR). Two pilot studies were conducted to explore the question and improve the methodology for the dissertation study, but they produced conflicting results: in the first pilot study, students in the CS condition performed the worst; in the second, students in the VSBR condition performed the worst. These conflicting results further motivated the dissertation study, which used a larger sample size and improved methodology. In the dissertation study, students' performance on the posttests in the VSBR condition was significantly worse than in the other two conditions, consistent with the second pilot study, while performance in the CS and VSWR conditions did not differ. Contrary to expectation, the strength of the effect of superficial variability did not vary between the easy and difficult problem types, and the pattern was the same when the difficulty variable was excluded. These results suggest that superficial consistency of examples across different problem types promotes more effective learning than superficial variation between problem types. The consistency can be a single cover story used multiple times for each type of problem, or the same battery of varied cover stories used repeatedly across problem types. Moreover, the pattern of the influence of superficial variability is robust across problem types at varying difficulty levels.
254

Goal Introduction in Online Discussion Forums: An Activity Systems Analysis

Dashew, Brian Leigh January 2018 (has links)
Self-direction is the process by which individuals collaborate in the construction of meaningful learning objectives and use internal and external controls to meet those objectives. In professional contexts, self-direction is seen as an increasingly important skill for engagement in complex organizations and industries. Modern innovations in program development for adult learners, therefore, should address learners' needs for self-motivation, self-monitoring, and self-management. Social learning contexts, such as online class discussion forums, have emerged as potentially democratic spaces in online learning. Yet evaluation methods for assessing online discussion have not considered the ways in which student-introduced goals influence how quality is operationalized and studied. This research attempted to understand if, when, and how adult learners leverage online course discussions as a space to introduce and moderate their own learning and professional goals. The study used activity systems analysis as a framework for assessing self-direction within a complex social learning environment. A sample drawn from three sections of an online Research Design course was observed, surveyed, and interviewed to develop a visual map and narrative description of their perceptions of a discussion activity system. A cross-case analysis of these maps was used to define five systemic tensions that prevented students from aligning their goals with the instructor-designed activities. When faced with these tensions, students either subjugated their own goals to the instructor's explicit goals or introduced one of eight mediating behaviors associated with self-directed learning. The study yielded five emergent hypotheses that require further investigation: (1) that self-directed learning is not inherent, even among Millennial learners; (2) that self-directed learning is collaborative; (3) that goals for interaction in social learning environments are not universal; (4) that goals must be negotiated, explicit, and activity-bound; and (5) that self-directed learning may not be an observable phenomenon.
255

Modeling and Analysis of Complex Systems Design Processes

Kushal A. Moolchandani (5930063) 21 December 2018 (has links)
This work proposes a framework for modeling an organization as a network of autonomous design agents who collectively work on the design of a complex system. The research objective is to identify the design process policy that best suits the organization, evaluated on the basis of the value it provides. The research question is therefore: "How does an organization comprised of autonomous design teams select a design process policy which provides the highest value?" The proposed framework models design teams as agents who adapt their behavior using information on design variables available from other teams and incentives in the form of rewards from a system-level designer.

While the extant literature on complex systems design has proposed several models of design processes, there is still a need for models versatile enough to represent different purposes and scopes of hierarchical levels, and existing models still do not account for the social, cultural, and political aspects of design. Because of the invariably long development time of a complex system, the dynamics of the environment, such as changing requirements, force all design teams to update their models and decisions during the process while accounting for the decisions of the other teams. The system-level designer, in turn, has to ensure that the design teams' decisions are in the best interest of the organization, which is to maximize value. This research addresses these issues by taking a bottom-up approach to modeling this complex, dynamic, and uncertain design environment, in which organization-level outcomes emerge from the decisions of individual teams responding to local incentives.

The system-level designer and the subsystem design teams are modeled as interacting with the other agents with whom they share design variables. The subsystem teams first solve their local design problems and then exchange the results with the other teams. The model is versatile enough to represent human behaviors such as adding margins to design variables during information exchange. In each interaction, the receiving team decides whether to update its local variable with the newly available value or to continue using its own, choosing whichever leads to the higher utility as measured by a predefined value function. Thus, each team acts in its self-interest and maximizes its local value. If the teams do not arrive at a common design, the system-level designer assigns rewards that incentivize the teams to update their designs to be compatible with the other teams': a team will forgo a portion of the utility it obtains from the design outcome if it is compensated for the loss by the system-level designer. The task of the system-level designer is therefore to solve a compatibility problem that trades off the outcomes of different subsystems and arrives at the final design while maximizing the organization's value.

The framework is developed and then described through a series of increasingly complex design cases using a synthetic optimization problem. An aircraft design problem then demonstrates the application of the framework. The results from both the synthetic and the demonstration problems inform a discussion of various characteristics of complex systems design processes.
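The per-team update decision described above can be sketched compactly. The Python fragment below is a hypothetical illustration, not code from the dissertation: the value function, variable values, and reward are invented to show the mechanism of a team adopting a shared design variable only when its local utility, plus the system-level reward, improves.

```python
# Hypothetical sketch of the per-team update decision described above.
# A team adopts a neighbor's value for a shared design variable only if
# its local utility (plus any reward offered by the system-level
# designer) improves. Names and value functions are illustrative.

def local_utility(x: float) -> float:
    """Stand-in value function for one subsystem team."""
    return -(x - 3.0) ** 2          # this team's utility peaks at x = 3.0

def decide(own_value: float, neighbor_value: float, reward: float) -> float:
    """Keep our value or adopt the neighbor's, whichever pays more.

    `reward` is the side payment the system-level designer offers for
    adopting the compatible (neighbor's) value.
    """
    keep = local_utility(own_value)
    adopt = local_utility(neighbor_value) + reward
    return neighbor_value if adopt > keep else own_value

# Without a reward the team keeps its self-interested choice ...
print(decide(own_value=3.0, neighbor_value=4.0, reward=0.0))   # -> 3.0
# ... but a large enough reward makes the compatible design worthwhile.
print(decide(own_value=3.0, neighbor_value=4.0, reward=1.5))   # -> 4.0
```

The second call shows the compensation mechanism from the abstract: the team gives up 1.0 unit of design utility but is made whole by a 1.5-unit reward, so it adopts the compatible value.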
256

Say, Do, Make? User involvement in information systems design

Tokkonen, H. (Helena) 05 March 2019 (has links)
Abstract: User involvement in information systems design has recently gained interest in the media. Numerous systems have been digitalized during product development to help people's everyday lives. But are information systems designed to meet users' needs and support users' goals? The goal of this research was to understand how user involvement is perceived in information systems design and how users are involved. Is the basis of user involvement what a user Says or what a user Does, or does the user actively participate in the whole design process? The study investigated 20 different design projects with a qualitative method, interviewing 20 designers from the selected design cases. First, an a priori model of user involvement in information systems design was created from an analysis of the extant literature and used in the analysis of the design cases. Based on the empirical data, a revised a posteriori model, the UICD model, was developed. The UICD model provides an overall picture of user involvement in information systems design and can help designers understand user involvement comprehensively: what users Say, what users Do, and what users Make in the design process. Compared to the a priori model, the UICD model also includes the impact of other key stakeholders in the information systems design process.
257

Novel optical access network architectures and transmission system technologies for optical fiber communications

Wang, Zhaoxin January 2006 (has links)
Currently, optical communications plays an important role in the transmission aspect of backbone fiber networks. However, two challenges remain in this field: one is the bottleneck between high-capacity local area networks (LANs) and the backbone network, where the answer is broadband optical fiber access networks; the other is the bottleneck of low-speed electrical signal processing in high-capacity optical networks, where one possible solution is all-optical nonlinear signal processing. This thesis covers both topics. In the first, the emphasis is on novel optical access network infrastructure designs that improve access network reliability and functionality while reducing system complexity. In the second, the focus is on how to utilize newly emerging photonic devices and newly designed configurations to improve the performance of current optical signal processing subsystems for applications in lightwave transmission systems.

In the area of broadband optical fiber access networks, two aspects are considered: survivability and monitoring. For the first, several new network protection schemes for various access network topologies (i.e., tree and ring) are proposed and experimentally demonstrated, reducing the cost and simplifying the operation of the access network. For the second, an in-service fault surveillance scheme for current TDM-PONs is proposed, based on analyzing the composite radio-frequency (RF) spectrum of the common supervisory channel at the central office (CO); experiments prove its effectiveness, with negligible influence on the signal channels. In addition, a system demonstration of a WDM-based optical broadband access network with automatic optical protection is presented, showing the potential of WDM technologies in broadband optical access networks.

In the area of nonlinear optical signal processing, the innovation is in two areas: new architecture designs and new photonic devices. For architecture design, the focus is on the nonlinear optical loop mirror (NOLM) structure. A new polarization-independent OTDM demultiplexing scheme is proposed and demonstrated by incorporating a polarization-diversity loop into a conventional NOLM, offering stable operation using conventional components without sacrificing operation speed or structural simplicity. In another study, a novel OFSK transmitter based on a phase-modulator-embedded NOLM is conceived and implemented, featuring data-rate transparency, continuous tuning of the wavelength spacing, and stable operation. On the device side, the thesis focuses on applications of photonic crystal fibers (PCF). In one work, a relatively short length of dispersion-flattened high-nonlinearity PCF (γ = 11.2 (W·km)⁻¹, D < −1 ps/nm/km over 1500–1600 nm, S < 1 × 10⁻³ ps/km/nm²) is integrated into a dispersion-imbalanced loop mirror (DILM) to form a nonlinear intensity discriminator, whose application to the nonlinear suppression of incoherent interferometric crosstalk is successfully demonstrated. The special characteristics of the PCF ensure broadband signal-quality improvement and make the DILM more compact and stable. In the other work, the small birefringence of this PCF makes it simple to achieve a polarization-insensitive wide-band wavelength converter based on four-wave mixing in the PCF.

In summary, this thesis introduces a series of novel optical access network architecture designs and transmission system technologies for optical fiber communications and discusses their practical feasibility from a research perspective. We hope that these proposed technologies can contribute to further developments in this field.
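For a sense of scale, the quoted nonlinear coefficient sets the self-phase-modulation phase shift φ = γPL that a signal accumulates in the fiber. In the sketch below, γ is the value quoted in the abstract, while the peak power and fiber length are assumed round numbers for illustration only:

```python
# Nonlinear phase shift phi = gamma * P * L in the high-nonlinearity PCF.
# gamma is quoted in the abstract; power and length are assumed values
# chosen only to illustrate the "relatively short length" point.

import math

gamma = 11.2e-3        # nonlinear coefficient, 1/(W*m)  (= 11.2 (W*km)^-1)
peak_power = 0.5       # assumed peak power, W
length = 64.0          # assumed fiber length, m

phi = gamma * peak_power * length      # accumulated nonlinear phase, rad
print(f"phi = {phi:.2f} rad ({phi / math.pi:.2f} pi)")
# With these assumptions phi ~ 0.36 rad; reaching a pi shift for switching
# in a loop mirror would need roughly 9x more power-length product.
```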
258

Scalable Equivalence Checking for Behavioral Synthesis

Yang, Zhenkun 05 August 2015 (has links)
Behavioral synthesis is the process of compiling an Electronic System Level (ESL) design to a register-transfer level (RTL) implementation. ESL specifications define the design functionality at a high level of abstraction (e.g., in C/C++ or SystemC), and thus provide a promising approach to the exacting demands of developing feature-rich, optimized, and complex hardware systems on aggressive time-to-market schedules. Behavioral synthesis entails the application of complex and error-prone transformations during compilation, so its adoption depends on our ability to ensure that the synthesized RTL conforms to the ESL description. This dissertation provides end-to-end, scalable equivalence checking support for behavioral synthesis. The major challenge of this research is to bridge the huge semantic gap between the ESL and RTL descriptions, which makes direct comparison of ESL and RTL designs difficult; moreover, the large number and wide variety of aggressive transformations from front-end to back-end require an end-to-end, scalable checking framework. A behavioral synthesis flow can be divided into three major phases: 1) front-end: compiler transformations; 2) scheduling: assigning each operation a clock cycle while satisfying the user-specified constraints; and 3) back-end: local optimizations and RTL generation. Our end-to-end, incremental equivalence checking framework checks each of the three phases in turn. First, we check the front-end, which consists of a sequence of compiler transformations, by decomposing it into a series of checks, one for each transformation applied: we symbolically explore paths in the input and output programs of each transformation and check whether the two programs have the same observable behavior under the same path condition. Second, we validate the scheduling transformation by checking the preservation of control and data dependencies, and the preservation of I/O timing in the user-specified scheduling mode. Third, we symbolically simulate the scheduled design and the generated RTL cycle by cycle and check the equivalence of each pair of mapped variables; we also develop several key optimizations to make our back-end checker scale to real industrial-strength designs. In addition to the equivalence checking framework, we present an approach to detecting deadlocks introduced by the parallelization of RTL blocks connected by synthesized interfaces with handshaking protocols. To demonstrate the efficiency and scalability of the framework, we evaluated it on transformations applied by a behavioral synthesis tool to designs from the C-based CHStone and SystemC-based S2CBench benchmarks. Our front-end checker efficiently validates more than 75 percent of the 1008 compiler transformations applied to the CHStone designs, taking an average of 1.5 seconds per transformation. Our scheduling checker validates the control-data dependencies and I/O timing of all designs from the S2CBench benchmark. Our back-end checker handles designs with more than 32K lines of synthesized RTL from the CHStone benchmark, demonstrating its scalability. Furthermore, our checker found several bugs in a commercial tool, underlining both the importance of formal equivalence checking and the effectiveness of our approach.
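The front-end check sketched above, same observable behavior under the same path condition, can be pictured with an off-the-shelf SMT solver. The toy example below is not from the dissertation; it checks a single hypothetical transformation (strength reduction of a multiply into a shift under a branch) by asking the Z3 solver for a counterexample:

```python
# Toy illustration (not from the dissertation) of validating one compiler
# transformation: prove the input and output programs produce the same
# observable output on every path, using the Z3 SMT solver.

from z3 import BitVec, If, Solver, sat

x = BitVec("x", 32)

# Input program:   if (x > 0) y = x * 2; else y = 0;
y_in = If(x > 0, x * 2, 0)

# Output program after strength reduction: the multiply becomes a shift.
y_out = If(x > 0, x << 1, 0)

s = Solver()
s.add(y_in != y_out)          # search for any x where the outputs differ

if s.check() == sat:
    print("transformation is buggy, counterexample:", s.model())
else:
    print("observable behavior preserved on all paths")
```

Here the solver reports no counterexample, so the transformation preserves behavior; a real front-end checker repeats this style of query once per transformation in the compilation sequence.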
259

Certifying Loop Pipelining Transformations in Behavioral Synthesis

Puri, Disha 20 March 2017 (has links)
Due to the rapidly increasing complexity of hardware designs and competitive time-to-market pressures in the industry, there is an inherent need to move designs to a higher level of abstraction. Behavioral synthesis is the process of automatically compiling such Electronic System Level (ESL) designs, written in high-level languages such as C, C++, or SystemC, into register-transfer level (RTL) implementations in hardware description languages such as Verilog or VHDL. The adoption of this flow, however, depends on designers' faith in the correctness of behavioral synthesis tools. Loop pipelining is a critical transformation in the behavioral synthesis process, ubiquitous in commercial and academic tools: it improves the throughput and reduces the latency of the synthesized hardware. It is also complex and error-prone, and a small bug can result in faulty hardware with expensive ramifications. It is therefore critical to certify the loop pipelining transformation so that designers can trust behaviorally synthesized pipelined designs. Certifying a loop pipelining transformation is, however, a major research challenge: there is a huge semantic gap between the input sequential design and the output pipelined implementation, making it infeasible to verify their equivalence with automated sequential equivalence checking (SEC) techniques alone. Complex loop pipelining transformations can instead be certified by a combination of theorem proving and SEC: (1) creating a certified pipelining algorithm that generates a reference pipeline model by exploiting pipeline generation information from the synthesis flow (e.g., the iteration interval of a generated pipeline), and (2) conducting SEC between the synthesized pipeline and this reference model. A key, and arguably the most complex, component of this approach is the development of a formal, mechanically verifiable loop pipelining algorithm. We show how to systematically construct such an algorithm and carry out its verification using the ACL2 theorem prover. We propose a framework of certified pipelining primitives that are essential for designing pipelining algorithms, and use it to build a certified loop pipelining algorithm. We also propose a key invariant for certifying this algorithm, linking sequential loops with their pipelined counterparts; it is unlike the invariants used so far in proofs of microprocessor pipelines. The certified algorithm is essential to the overall approach of certifying behaviorally synthesized pipelined designs, and we demonstrate its scalability and robustness on several ESL designs across various domains.
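The reference pipeline model described above is organized around the iteration interval: successive loop iterations start a fixed number of cycles apart rather than back to back. The following small calculation is illustrative only, with assumed iteration counts and depths rather than figures from the dissertation:

```python
# Illustrative latency comparison for loop pipelining (not from the
# dissertation). With N iterations, a per-iteration depth of D cycles,
# and an iteration interval II, successive iterations start II cycles
# apart, so the pipelined loop finishes at (N - 1) * II + D cycles.

def sequential_cycles(n_iters: int, depth: int) -> int:
    return n_iters * depth

def pipelined_cycles(n_iters: int, depth: int, ii: int) -> int:
    return (n_iters - 1) * ii + depth

N, D, II = 100, 4, 1          # assumed example numbers
print(sequential_cycles(N, D))        # -> 400 cycles
print(pipelined_cycles(N, D, II))     # -> 103 cycles
```

The throughput and latency gain is exactly what makes the transformation attractive, and the overlapping of iterations it requires is what makes a direct sequential equivalence proof so hard.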
260

Adoption of an Internet of Things Framework for Distributed Energy Resource Coordination and Control

Slay, Tylor 18 July 2018 (has links)
Increasing penetration of non-dispatchable renewable energy resources and greater peak power demand present growing challenges to Bulk Power System (BPS) reliability and resilience. This research investigates the use of an Internet of Things (IoT) framework for large-scale Distributed Energy Resource (DER) aggregation and control to reduce the energy imbalance caused by stochastic renewable generation. The aggregator developed for this research is the Distributed Energy Resource Aggregation System (DERAS), which comprises two AllJoyn applications written in C++. The first is the Energy Management System (EMS), which aggregates, emulates, and controls connected DERs. The second is the Distributed Management System (DMS), the interface between AllJoyn and the physical DER. The EMS runs on a cloud-based server with 8 GB of allocated memory and an 8-thread, 2 GHz processor. Raspberry Pis host the simulated Battery Energy Storage System (BESS) and electric water heater (EWH) DMSs; five Raspberry Pis were used to simulate 250 DMSs. The EMS used the regulation control signals RegA and RegD from PJM, a regional transmission organization (RTO), to determine DERAS performance metrics; these signals direct power resources to negate load and generation imbalances within the BPS. DERAS's performance was measured by EMS server resource usage, network data transfer, and signal delay, and the regulation capability of the aggregated DERs was measured using PJM's resource performance assessment criteria. We found the IoT framework for DER aggregation and control to be inadequate in the current network implementation. However, the emulated modes and the aggregation's response to the regulation control signal demonstrate an excellent opportunity for DERs to benefit the BPS.
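The aggregation concept, splitting one regulation signal across many small resources, can be sketched in a few lines. The Python below is a hypothetical illustration, not DERAS code: the device names, headroom figures, and proportional-dispatch policy are all assumptions.

```python
# Illustrative sketch of regulation-signal dispatch across aggregated
# DERs (not the DERAS implementation). A normalized regulation signal
# in [-1, 1] is split among devices in proportion to the headroom each
# one reports, the way an aggregator negates BPS imbalance.

from dataclasses import dataclass

@dataclass
class Der:
    name: str
    headroom_kw: float     # power the device can add or shed right now

def dispatch(signal: float, fleet: list[Der]) -> dict[str, float]:
    """Ask each device for the signal's fraction of its own headroom."""
    total = sum(d.headroom_kw for d in fleet)
    return {d.name: signal * d.headroom_kw for d in fleet} if total else {}

fleet = [Der("bess-01", 5.0), Der("ewh-17", 4.5), Der("ewh-42", 4.5)]
setpoints = dispatch(0.6, fleet)      # e.g., signal asks for 60% of capacity
for name, kw in setpoints.items():
    print(f"{name}: {kw:+.2f} kW")    # per-device setpoints summing to 8.4 kW
```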
