  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Laboruntersuchungen zum Gefrierprozess in polaren stratosphaerischen [Laboratory studies of the freezing process in polar stratospheric]

Kraemer, Benedikt, Heidelberg 10 December 1998 (has links)
No description available.
92

Bus Topology Exploration and Memory Allocation for Heterogeneous Systems

Wu, Jhih-Yong 02 August 2007 (has links)
As semiconductor processes continue to improve, the complexity of systems-on-chip rises daily and ever more elements can be placed on the same chip area. System designers have been searching for a methodology that can handle such complex systems and an environment in which a system-on-chip can be simulated quickly. Raising the level of abstraction, as in Electronic System Level (ESL) design, has been put forward as that methodology. But system designers still need to decide the system architecture (the bus and PE connection structure) and judge from simulation results whether the system meets its performance and cost constraints. For very complex systems, the growth of the design space means designers spend more and more time finding the best system architecture. In this thesis, we propose a synthesis method that supports automatic ESL design and helps system designers select a system architecture from a large design space in a short time. The method uses fast estimation of the bus topology and memory allocation that affect the processing elements' communication. With it, we can find a better system architecture that meets all constraints with the same number of processing elements.
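The fast-estimation flow the abstract describes can be sketched in miniature: enumerate candidate bus topologies and memory placements, estimate the communication cost of each, and keep the cheapest candidate that meets the constraint. The cost model, constants, and PE names below are illustrative assumptions, not the thesis's actual estimator.

```python
from itertools import product

# Hypothetical communication volumes between processing elements (PEs),
# in abstract units. Names and numbers are illustrative only.
COMM_VOLUME = {("pe0", "pe1"): 40, ("pe0", "pe2"): 5, ("pe1", "pe2"): 25}

def estimate_latency(topology, memory_site):
    """Estimate total communication latency for one candidate.

    topology maps each PE pair to a bus kind ("shared" or "private");
    a shared bus is assumed cheaper in wiring but slower per transfer.
    """
    per_transfer = {"shared": 3, "private": 1}
    latency = sum(vol * per_transfer[topology[pair]]
                  for pair, vol in COMM_VOLUME.items())
    # Off-chip memory adds a fixed penalty per unit of traffic it serves.
    if memory_site == "external":
        latency += sum(COMM_VOLUME.values())
    return latency

def explore(latency_budget):
    """Enumerate bus/memory candidates; return the cheapest in budget."""
    pairs = list(COMM_VOLUME)
    best = None
    for buses in product(["shared", "private"], repeat=len(pairs)):
        topology = dict(zip(pairs, buses))
        for memory_site in ("on_chip", "external"):
            cost = estimate_latency(topology, memory_site)
            if cost <= latency_budget and (best is None or cost < best[0]):
                best = (cost, topology, memory_site)
    return best
```

Exhaustive enumeration like this only scales to toy design spaces; a fast per-candidate estimator is what makes searching realistic spaces tractable, which is the point of the proposed method.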
93

OpenCL Framework for a CPU, GPU, and FPGA Platform

Ahmed, Taneem 01 December 2011 (has links)
With the availability of multi-core processors, high-capacity FPGAs, and GPUs, a heterogeneous platform with tremendous raw computing capacity can be constructed from any number of these computing elements. However, one of the major challenges in constructing such a platform is the lack of a standardized framework under which an application's computational tasks and data can be easily and effectively managed amongst the computing elements. In this thesis such a framework is developed based on OpenCL (Open Computing Language). An OpenCL API and runtime framework, called O4F, was implemented to incorporate FPGAs into a platform with CPUs and GPUs under the OpenCL framework. O4F helps explore the possibility of using OpenCL as the framework for combining FPGAs with CPUs and GPUs. This thesis details the findings of this first-generation implementation and provides recommendations for future work.
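As a rough conceptual sketch of what an OpenCL-style runtime must provide (a context holding heterogeneous devices, per-device command queues, and a way to route enqueued kernels), consider the following. The class names, scheduling policy, and the `prefer` hint are assumptions for illustration; they do not reflect O4F's actual API, which follows the OpenCL specification.

```python
import queue

class Device:
    """One compute element (CPU, GPU, or FPGA) with its own queue."""
    def __init__(self, name, kind):
        self.name, self.kind = name, kind
        self.commands = queue.Queue()   # per-device command queue

class Context:
    """Holds the devices an application may enqueue work to."""
    def __init__(self, devices):
        self.devices = devices

    def enqueue(self, kernel, prefer=None):
        """Enqueue a kernel, honoring a preferred device kind if given
        (e.g. routing a bit-level kernel to the FPGA); otherwise pick
        the least-loaded device. Returns the chosen device's name."""
        candidates = [d for d in self.devices if d.kind == prefer] \
                     or self.devices
        target = min(candidates, key=lambda d: d.commands.qsize())
        target.commands.put(kernel)
        return target.name

ctx = Context([Device("cpu0", "CPU"), Device("gpu0", "GPU"),
               Device("fpga0", "FPGA")])
```

For example, `ctx.enqueue("fir_filter", prefer="FPGA")` would land the kernel on `fpga0`, while an unconstrained enqueue falls back to the least-loaded device.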
95

How Physical and Chemical Properties Change Ice Nucleation Efficiency of Soot and Polyaromatic Hydrocarbon Particles

Suter, Katie Ann, August 2011 (has links)
Heterogeneous freezing processes in which atmospheric aerosols act as ice nuclei (IN) cause nucleation of ice crystals in the atmosphere. Heterogeneous nucleation can occur through several freezing mechanisms, including contact and immersion freezing. The mechanism by which this freezing occurs depends on the ambient conditions and composition of the IN. Aerosol properties change through chemical aging and reactions with atmospheric oxidants such as ozone. We have conducted a series of laboratory experiments using an optical microscope apparatus equipped with a cooling stage to determine how chemical oxidation changes the ability of atmospheric aerosols to act as IN. Freezing temperatures are reported for aerosols composed of fresh and oxidized soot and polyaromatic hydrocarbons (PAHs) including anthracene, phenanthrene, and pyrene. Our results show that oxidized soot particles initiate ice freezing events at significantly warmer temperatures than fresh soot, by 3 °C on average. All oxidized PAHs studied had significantly warmer freezing temperatures than fresh samples. The chemical changes presumably responsible for the improved ice nucleation efficiency were observed using Fourier transform infrared spectroscopy with horizontal attenuated total reflectance (FTIR-HATR). The addition of C=O bonds at the surface of the soot and PAHs led to changes in freezing temperatures. Finally, we have used classical nucleation theory to derive heterogeneous nucleation rates for the IN compositions in this research. The overall efficiency of the IN can be ranked from least efficient to most efficient: fresh phenanthrene, fresh anthracene, fresh soot, oxidized phenanthrene, fresh pyrene, oxidized anthracene, oxidized soot, and oxidized pyrene. Overall, oxidation of aerosols increases their ability to act as IN. Our results suggest that oxidation processes facilitate freezing at warmer temperatures over a broader range of conditions in the atmosphere.
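The classical nucleation theory mentioned at the end can be sketched in its standard heterogeneous form, in which a contact-angle factor f(θ) scales down the homogeneous energy barrier; a more efficient IN behaves like a surface with a smaller effective contact angle. The kinetic prefactor and all sample parameter values below are assumed for illustration and are not the thesis's fitted numbers.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def contact_factor(theta_deg):
    """Geometric reduction of the nucleation barrier by a substrate
    (classical heterogeneous nucleation; theta is the contact angle).
    f(180 deg) = 1 recovers the homogeneous limit; f(0 deg) = 0."""
    c = math.cos(math.radians(theta_deg))
    return (2 + c) * (1 - c) ** 2 / 4

def hetero_rate(temp, sigma, v_mol, sat_ratio, theta_deg, prefactor=1e25):
    """Heterogeneous nucleation rate J = A * exp(-dG* f(theta) / kT).

    sigma: interfacial energy (J/m^2); v_mol: molecular volume (m^3);
    sat_ratio: saturation ratio S > 1; prefactor A is an assumed
    kinetic constant (events per unit area per second).
    """
    dg_homo = (16 * math.pi * sigma**3 * v_mol**2) / \
              (3 * (K_B * temp * math.log(sat_ratio)) ** 2)
    return prefactor * math.exp(-dg_homo * contact_factor(theta_deg)
                                / (K_B * temp))
```

Lowering θ shrinks the barrier, so the predicted rate J rises steeply; this is consistent with the observation that better IN trigger freezing at warmer temperatures.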
96

Immobilized metallodithiolate ligand supports for construction of bioinorganic model complexes

Green, Kayla Nalynn 15 May 2009 (has links)
The A-cluster active site in acetyl-CoA synthase exploits a Ni(CGC)2- metallopeptide as a bidentate ligand to chelate the catalytically active square-planar nickel center used to produce acetyl-CoA. As Nature utilizes polypeptides to isolate and stabilize active sites, we have set out to immobilize biomimetic complexes on polyethylene-glycol (PEG) rich polystyrene polymer beads (TentaGel). The PEG-rich resin beads serve to imitate the peptidic superstructure of enzyme active sites as well as to protect the resin-bound models from O2 decomposition. As a model of the NiN2S2 ligand observed in the A-cluster of acetyl-CoA synthase, the CGC tripeptide was constructed on resins using Merrifield solid-phase peptide synthesis and then metallated with NiII to produce bright orange beads. Derivatization with M(CO)x (M = Rh, W) provided qualitative identification of Ο-Ni(CGC)M(CO)x n- via ATR-FTIR. Additionally, Neutron Activation Analysis (NAA) and UV-vis studies have determined the concentrations of Ni and CGC and qualitatively identified Ο-Ni(CGC)2-. Furthermore, infrared studies and NAA experiments have been used to identify and quantify Ο-Ni(CGC)Rh(CO)2 1-. The S-based reactivity of Ni(ema)2-, a good model of Ni(CGC)2-, toward oxygenation and alkylation has been pursued and compared to neutral NiN2S2 complexes. The spectroscopic, electrochemical, and structural effects of these modifications will be discussed and supported using DFT computations and electrostatic potential maps of the resulting Ni(ema)*O2 2- and Ni(ema)*(CH2)3 complexes. Having firmly established the synthesis, characterization, and reactivity of NiN2S2 2- systems in solution and resin-bound, CuIIN2S2 analogues were explored. The synthesis and identification of the solution complexes Cu(ema)2-, Cu(emi)2-, and Cu(CGC)2- via UV-vis, EPR, and –ESI-MS will be discussed in addition to their S-based reactivity with Rh(CO)2+.
Furthermore, the resin-bound Cu(CGC)2- complex has been produced and characterized by EPR, and its Rh(CO)2 adduct identified by ATR-FTIR and compared to the analogous NiN2S2 2- systems. As the active site of [FeFe]-hydrogenase utilizes a unique peptide-bound propanedithiolate bridge to support the FeFe organometallic unit, [FeFe]-hydrogenase models have been covalently anchored to the resin beads via similar carboxylic acid functionalities. The characterization (ATR-FTIR, EPR, Neutron Activation Analysis), stability, and reactivity of the immobilized model complexes are discussed, as well as work toward establishing the microenvironment of resin-bound complexes.
97

Stable and scalable congestion control for high-speed heterogeneous networks

Zhang, Yueping 10 October 2008 (has links)
For any congestion control mechanism, the most fundamental design objectives are stability and scalability. However, achieving both properties is very challenging in an environment as heterogeneous as the Internet. From the end-users' perspective, heterogeneity is due to the fact that different flows have different routing paths and therefore different communication delays, which can significantly affect the stability of the entire system. In this work, we successfully address this problem by first proving a sufficient and necessary condition for a system to be stable under arbitrary delay. Utilizing this result, we design a series of practical congestion control protocols (MKC and JetMax) that achieve stability regardless of delay, as well as many additional appealing properties. From the routers' perspective, the system is heterogeneous because the incoming traffic is a mixture of short- and long-lived, TCP and non-TCP flows. This imposes a severe challenge on traditional buffer sizing mechanisms, which are derived using the simplistic model of a single or multiple synchronized long-lived TCP flows. To overcome this problem, we take a control-theoretic approach and design a new intelligent buffer sizing scheme called Adaptive Buffer Sizing (ABS), which, based on the current incoming traffic, dynamically sets the optimal buffer size under the target performance constraints. Our extensive simulation results demonstrate that ABS exhibits quick response to changes in traffic load, scalability to a large number of incoming flows, and robustness to generic Internet traffic.
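A minimal sketch of an ABS-style update, assuming a simple feedback law (the gains, targets, and clamping bounds below are illustrative and not the dissertation's actual controller): grow the buffer while the loss rate exceeds its target, shrink it while queueing delay exceeds its target.

```python
def abs_step(buf_size, loss_rate, delay, *,
             loss_target=0.01, delay_target=5.0,
             k_loss=2000.0, k_delay=10.0,
             buf_min=32, buf_max=65536):
    """One control step: return the new buffer size in packets.

    loss_rate is the fraction of packets dropped in the last interval;
    delay is the measured queueing delay (same units as delay_target).
    Excess loss pushes the size up, excess delay pushes it down, and
    the result is clamped to [buf_min, buf_max].
    """
    adjust = k_loss * (loss_rate - loss_target) \
             - k_delay * (delay - delay_target)
    new_size = buf_size + adjust
    return int(min(buf_max, max(buf_min, new_size)))
```

Run once per measurement interval, a controller of this shape drives the buffer toward the smallest size that meets both targets for the current traffic mix.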
98

Time Scheduling Study in Heterogeneous Sensor Networks

Lin, Min-rui 04 February 2009 (has links)
Because hierarchical sensor networks can eliminate redundant sensing information and reduce extra communication load, they are remarkably important for increasing network scalability and prolonging network lifetime. In this paper, we focus on the relay nodes of two-layered heterogeneous sensor networks. When relay nodes transmit data without scheduling, the collision probability increases, and retransmitting data and listening to the channel cost too much energy. To avoid this extra energy consumption, we build a grid network according to received signal strength (RSS) and assign an IP to every relay node on the grid network; all leaf nodes then join a cluster and are assigned IPs according to RSS. Each node's exclusive IP determines its TDMA schedule: a relay node wakes up to transmit or receive data only in its specific slot and sleeps the remaining time to save power. Besides, in order to balance the energy consumption of backbone and non-backbone relay nodes and prolong network lifetime, we propose three routing protocols (DTS, REARBS, VIPOS). According to the simulation results, VIPOS achieves the longest network lifetime of the three.
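The slot-from-ID idea can be sketched as a grid colouring, assuming a 3x3 pattern and a 9-slot TDMA frame (both are illustrative choices, not the paper's exact scheme):

```python
FRAME_SLOTS = 9  # one slot per cell of a repeating 3x3 grid colouring

def assign_slot(grid_x, grid_y):
    """Map a relay's grid cell to its slot in the TDMA frame.

    A 3x3 colouring guarantees that all eight surrounding cells get
    different slots, so neighbouring relays never transmit together.
    """
    return (grid_x % 3) * 3 + (grid_y % 3)

def is_awake(grid_x, grid_y, current_slot):
    """A relay powers its radio only during its own slot and sleeps
    for the rest of the frame, saving energy."""
    return assign_slot(grid_x, grid_y) == current_slot % FRAME_SLOTS
```

Because the slot is derived purely from a node's position-based ID, no extra control traffic is needed to negotiate the schedule, which is the energy-saving point of the scheme.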
99

Coordinated power management in heterogeneous processors

Paul, Indrani 08 June 2015 (has links)
164 pages; directed by Dr. Sudhakar Yalamanchili. With the end of Dennard scaling, the scaling of device feature size by itself no longer guarantees sustaining the performance improvement predicted by Moore's Law. As industry moves to increasingly small feature sizes, performance scaling will become dominated by the physics of the computing environment, and in particular by the transient behavior of interactions between power delivery, power management, and thermal fields. Consequently, performance scaling must be improved by managing interactions between physical properties, which we refer to as processor physics, and system-level performance metrics, thereby improving the overall efficiency of the system. The industry shift towards heterogeneous computing is in large part motivated by energy efficiency. While such tightly coupled systems benefit from reduced latency and improved performance, they also give rise to new management challenges due to phenomena such as physical asymmetry in the thermal and power signatures of the diverse elements and functional asymmetry in performance. Power-performance tradeoffs in heterogeneous processors are determined by coupled behaviors between major components due to (i) on-die integration, (ii) the programming model, and (iii) processor physics. Towards this end, this thesis demonstrates the need for coordinated management of the functional and physical resources of a heterogeneous system across all major compute and memory elements. It shows that the interactions among performance, power delivery, and the different types of coupling phenomena are not an artifact of a particular architecture instance but are fundamental to the operation of many-core and heterogeneous architectures. Managing such coupling effects is a central focus of this dissertation.
This awareness has the potential to exert significant influence over the design of future power and performance management algorithms. The high-level contributions of this thesis are i) in-depth examination of characteristics and performance demands of emerging applications using hardware measurements and analysis from state-of-the-art heterogeneous processors and high-performance GPUs, ii) analysis of the effects of processor physics such as power and thermals on system level performance, iii) identification of a key set of run-time metrics that can be used to manage these effects, and iv) development and detailed evaluation of online coordinated power management techniques to optimize system level global metrics in heterogeneous CPU-GPU-memory processors.
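A hedged sketch of the coordinated power shifting such management implies: within a fixed chip power budget, move power toward whichever unit is the current bottleneck. The thresholds, step size, and policy below are assumptions for illustration, not the thesis's algorithms.

```python
def rebalance(cpu_w, gpu_w, cpu_util, gpu_util, *,
              budget=95.0, step=5.0, floor=10.0):
    """Return new (cpu_w, gpu_w) power allocations in watts.

    The two units share one chip-level budget; each step moves a fixed
    amount of power toward the more utilized (bottleneck) unit, never
    dropping either unit below a minimum floor.
    """
    assert cpu_w + gpu_w <= budget
    if gpu_util > cpu_util and cpu_w - step >= floor:
        return cpu_w - step, gpu_w + step   # GPU-bound: feed the GPU
    if cpu_util > gpu_util and gpu_w - step >= floor:
        return cpu_w + step, gpu_w - step   # CPU-bound: feed the CPU
    return cpu_w, gpu_w                     # balanced: hold steady
```

A real controller would also fold in thermal headroom and memory bandwidth, which is exactly the coupling the thesis argues must be managed jointly rather than per-unit.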
100

Interference management in heterogeneous cellular networks

Xia, Ping 25 February 2013 (has links)
Heterogeneous cellular networks (HCNs) – comprising traditional macro base stations (BSs) and heterogeneous infrastructure such as microcells, picocells, femtocells and distributed antennas – are fast becoming a cost-effective and essential way of handling explosive wireless data traffic demands. Up until now, little basic research has been done on the fundamentals of managing so much infrastructure – much of it unplanned – together with the carefully planned macro-cellular network. This dissertation addresses the key technical challenges of inter-cell interference management in this new network paradigm. It first studies uplink femtocell access control in uncoordinated two-tier networks, i.e. networks where the femtocells cannot coordinate with macrocells. Closed access allows registered home users to monopolize their own femtocell and its backhaul connection, but also results in severe interference between femtocells and nearby unregistered macro users. Open access reduces such interference by handing over such users, at the expense of femtocell resource sharing. In the first analytical work on this topic, we study the best femtocell access technique from the perspectives of both network operators and femtocell owners, and show that it is strongly contingent on parameters such as the multiple access scheme (i.e. orthogonal vs. non-orthogonal) and the cellular user density (in TDMA/OFDMA). To study coordinated algorithms, whose success depends heavily on the rate and delay (vs. user mobility) of inter-cell overhead sharing, this dissertation develops various models of overhead signaling in general HCNs and derives the overhead quality contour – the achievable set of overhead packet rate and delay – under general assumptions on overhead arrivals and different overhead signaling methods (backhaul and/or wireless). The overhead quality contour is further simplified for two widely used models of overhead arrivals: Poisson and deterministic.
Based on the overhead quality contour, which is applicable to generic coordinated techniques, this dissertation develops a novel analytical framework to evaluate downlink coordinated multi-point (CoMP) schemes in HCNs. Combined with the signal-to-interference-plus-noise-ratio (SINR) characterization, this framework can be used for a class of CoMP schemes without user data sharing. As an example, we apply it to downlink CoMP inter-cell interference cancellation (ICIC), after deriving SINR results for it using a spatial Poisson point process (PPP) to capture the uncertainty in base station locations.
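The PPP-based SINR characterization can be illustrated with a small Monte Carlo sketch. Unit transmit power, a pure r^-alpha pathloss law, nearest-BS association, and the sampling details are simplifying assumptions, not the dissertation's full model.

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's method for a Poisson-distributed count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def ppp_downlink_sinr(density, radius, alpha=4.0, noise=1e-12, seed=0):
    """Sample one downlink SINR (linear scale) for a user at the origin.

    Base stations are dropped as a Poisson point process of the given
    density (per unit area) in a disc of the given radius; the user
    attaches to the nearest BS and every other BS interferes.
    """
    rng = random.Random(seed)
    n = _poisson(rng, density * math.pi * radius ** 2)
    if n == 0:
        return 0.0                       # no coverage in this sample
    # Distances of uniformly scattered points in the disc; only the
    # distances matter for pathloss, so angles are never drawn.
    dists = sorted(radius * math.sqrt(rng.random()) for _ in range(n))
    signal = dists[0] ** -alpha
    interference = sum(d ** -alpha for d in dists[1:])
    return signal / (interference + noise)
```

Averaging many such samples approximates the coverage probability that the PPP analysis derives in closed form, which is what makes the analytical framework useful: it replaces this simulation loop with an expression.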
