281

An asymmetric multi-core architecture for efficiently accelerating critical paths in multithreaded programs

Suleman, Muhammad Aater 20 October 2010
Extracting high performance from Chip Multiprocessors (CMPs) requires that the application be parallelized, i.e., divided into threads which execute concurrently on multiple cores. To save programmer effort, program portions that are difficult to parallelize are often left serial. We show that common serial portions, i.e., non-parallel kernels, critical sections, and limiter stages in a pipeline, become the critical path of the program as the number of cores increases, thereby limiting performance and scalability. We propose that, instead of burdening software programmers with the task of shortening the serial portions, the serial portions can be accelerated using hardware support. To this end, we propose the Asymmetric Chip Multiprocessor (ACMP) paradigm, which provides one (or a few) fast core(s) for accelerated execution of the serial portions and multiple slow, small cores for high throughput on the parallel portions. We show a concrete example implementation of the ACMP consisting of one large, high-performance core and many small, power-efficient cores. We develop hardware/software mechanisms to accelerate the execution of serial portions using the ACMP, and further improve the ACMP by proposing mechanisms to tackle common overheads it incurs.
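The intuition behind the ACMP can be seen with an Amdahl-style speedup model. The sketch below is an illustration only: the area budget, the Pollack's-rule performance scaling (a core of area r runs sqrt(r) times faster), and the parameter values are assumptions, not figures from the thesis.

```python
# Amdahl-style model: symmetric CMP vs. an ACMP that spends part of its
# area budget on one large core for the serial portion.
# Assumptions (not from the thesis): Pollack's rule for big-core speed,
# total area of n small-core equivalents, serial part runs on one core.

import math

def symmetric_speedup(f, n):
    """Speedup of a workload with parallel fraction f on n small cores."""
    return 1.0 / ((1 - f) + f / n)

def acmp_speedup(f, n, r):
    """ACMP: one large core of area r (perf sqrt(r)) runs the serial part;
    the remaining n - r small cores run the parallel part."""
    perf_big = math.sqrt(r)
    return 1.0 / ((1 - f) / perf_big + f / (n - r))

if __name__ == "__main__":
    f, n, r = 0.9, 64, 16  # hypothetical parallel fraction and area split
    print(f"symmetric: {symmetric_speedup(f, n):.1f}x")   # ~8.8x
    print(f"ACMP:      {acmp_speedup(f, n, r):.1f}x")     # ~22.9x
```

Even this crude model shows why accelerating the serial portion pays off: the serial term, not the parallel one, dominates as n grows.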
282

Unknown word sequences in HPSG

Mielens, Jason David 06 October 2014
This work investigates the properties of unknown words in HPSG, and in particular the phenomenon of multi-word unknown expressions consisting of multiple unknown words in a sequence. The work consists first of a study determining the relative frequency of multi-word unknown expressions, followed by a survey of the efficacy of a variety of techniques for handling these expressions. The techniques presented comprise modified versions of techniques from the existing unknown-word prediction literature as well as novel techniques, and they are evaluated with specific attention to how they fare on sentences with many unknown words and long unknown sequences.
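As an illustration of the phenomenon studied, the sketch below finds maximal runs of consecutive out-of-lexicon tokens in a sentence. The toy lexicon and the function are hypothetical, not the author's system.

```python
# Minimal sketch (assumed, not from the thesis): locate multi-word
# unknown sequences, i.e. maximal runs of tokens absent from the lexicon.

from itertools import groupby

LEXICON = {"the", "dog", "sleeps", "on", "a"}  # toy lexicon

def unknown_sequences(tokens, lexicon=LEXICON):
    """Return maximal runs of consecutive out-of-lexicon tokens."""
    runs = []
    for is_unknown, group in groupby(tokens, key=lambda t: t.lower() not in lexicon):
        if is_unknown:
            runs.append(list(group))
    return runs

print(unknown_sequences("the quokka bixbite sleeps on a futon".split()))
# -> [['quokka', 'bixbite'], ['futon']]  : one two-word sequence, one singleton
```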
283

Numerical Study of Coherent Structures within a legacy LES code and development of a new parallel Framework for their computation

Giammanco, Raimondo R 22 December 2005
The understanding of the physics of coherent structures and their interaction with the remaining fluid motions is of paramount interest in turbulence research. Indeed, it has recently been suggested that separating and understanding the different physical behaviours of coherent structures and the "incoherent" background might very well be the key to understanding and predicting turbulence. Available understanding of coherent structures shows that their size is considerably larger than the turbulent macro-scale, which makes Large Eddy Simulation (LES) applicable to their simulation and study, with the advantage of being able to examine their behaviour at higher Reynolds numbers and in more complex geometries than Direct Numerical Simulation would normally allow. The original purpose of the present work was therefore to validate the use of LES for the study of coherent structures in shear layers, and to apply it to different flow cases in order to study the effect of flow topology on the nature of the coherent structures. However, during the investigation of coherent structures in numerically generated LES flow fields, the ageing in-house LES code of the Environmental & Applied Fluid Dynamics Department exhibited a series of limitations and shortcomings, which led to the decision to relegate it to the status of a legacy code (from now on indicated as the VKI LES legacy code) and to discontinue its development. A new, natively parallel LES solver was then developed in the VKI Environmental & Applied Fluid Dynamics Department, in which all the shortcomings of the legacy code were addressed and modern software technologies were adopted both for the solver and for the surrounding infrastructure, delivering a complete framework based exclusively on Free and Open Source Software (FOSS) to maximise portability and avoid any dependency on commercial products. The new parallel LES solver retains some basic characteristics of the legacy code to provide continuity with the past (finite differences, staggered grid arrangement, multi-domain technique, grid conformity across domains), but improves on almost all the remaining aspects: the flow can now be inhomogeneous in all three directions, instead of only two; the pressure equation can be solved using a three-point stencil for improved accuracy; and the viscous and convective terms can be computed using discretised formulas derived automatically with the computer algebra system Maxima. For the convective terms, high-resolution central schemes have been adapted to the three-dimensional staggered grid arrangement from a collocated two-dimensional one, and a system of master-slave simulations has been developed, in which a slave simulation (on one processing element) runs in parallel to generate the inlet data for the master simulation (on the remaining n - 1 processing elements). The code performs automatic run-time load balancing and domain auto-partitioning, carries embedded documentation (Doxygen), and uses a CVS repository (version management) for the convenience of new and old developers alike. As part of the new framework, a set of visual programs is provided for IBM Open Data eXplorer (OpenDX), a powerful FOSS flow visualisation and analysis tool intended as a replacement for the commercial Tecplot, together with a bug-tracking mechanism (Bugzilla) and cooperative forum resources (phpBB) for developers and users alike. The new M.i.O.m.a. (MiOma) solver is ready to be used for coherent structures analysis in the near future.
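The staggered grid arrangement the solver retains can be illustrated with a toy example. The code below is a sketch under stated assumptions, not part of the MiOma solver: with pressure at cell centres and velocity at cell faces, the pressure gradient driving the velocity falls naturally on the faces with a compact two-point difference and no interpolation.

```python
# Illustrative staggered-grid sketch (toy numbers, not the MiOma code):
# pressure p lives at cell centres, velocity u at cell faces, so dp/dx
# at each interior face is a compact difference of neighbouring centres.

import numpy as np

def pressure_gradient(p, dx):
    """dp/dx evaluated at the interior cell faces from cell-centred p."""
    return (p[1:] - p[:-1]) / dx

dx = 0.1
p = np.linspace(0.0, 0.7, 8)        # toy linear pressure field
print(pressure_gradient(p, dx))     # constant gradient of ~1.0, as expected
```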
284

Intelligent Motion Planning for a Multi-Robot System

Johansson, Ronnie January 2001
Multi-robot systems of autonomous mobile robots offer many benefits but also many challenges. This work addresses collision avoidance between robots solving continuous problems in known environments. The approach taken to collision avoidance is to enhance a motion-planning method for single-robot systems so that it accounts for auxiliary robots. A few assumptions are made to put the focus of the work on path planning, rather than on localization.

A method based on exact cell decomposition and extended with a few rules was developed and its consistency was proven. The method is divided into two steps: path planning, which is off-line, and path monitoring, which is on-line. This work also introduces the notion of a path obstacle, an essential tool for this kind of path planning with many robots.

Furthermore, an implementation was carried out on a system of omni-directional robots and tested in simulations and experiments. The implementation practices centralized control, letting an additional computer handle the motion planning to relieve the robots of strenuous computations.

A few drawbacks of the method are stressed, and the characteristics of problems for which the method is suitable are presented.
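The monitoring step can be illustrated by treating another robot's planned path as an obstacle in time as well as space. The sketch below, with a hypothetical conflicts function and toy paths, conveys that idea; it is not the thesis implementation.

```python
# Minimal path-monitoring sketch (hypothetical, not the thesis code):
# check two time-indexed, discretized paths for conflicts, i.e. time
# steps at which the robots come within a safety radius of each other.

import math

def conflicts(path_a, path_b, safety_radius=0.5):
    """Return the time steps at which two paths come too close.
    Each path is a list of (x, y) waypoints, one per time step."""
    hits = []
    for t, ((xa, ya), (xb, yb)) in enumerate(zip(path_a, path_b)):
        if math.hypot(xa - xb, ya - yb) < safety_radius:
            hits.append(t)
    return hits

a = [(0, 0), (1, 0), (2, 0)]
b = [(2, 0), (1, 0), (0, 0)]
print(conflicts(a, b))  # -> [1] : both robots occupy (1, 0) at t = 1
```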
285

Parallel Paths of Equal Reliability Assessed using Multi-Criteria Selection for Identifying Priority Expenditure

Hook, Tristan William January 2013
This research project identifies factors that justify having parallel network links of similar reliability. There are two key questions requiring consideration: 1) When is it optimal to have or create two parallel paths of equal or similar reliability? 2) How could a multi-criteria selection method be implemented for assigning expenditure? Asset and project management always operate under financial constraints, which require a constant balancing of funds against priorities. Many methods are available to address these needs, but two of the most common tools are risk assessment and economic evaluation. In principle, both are well utilised and generally respected in the engineering community; yet when comparing parallel systems, both tend to favour a single priority link, a single option. Practice also tends to support this, as expenditure strengthens one link well above the alternative. The example used to demonstrate the potential for parallel paths of equal or similar reliability is the Wellington link from near the airport (Troy Street) up the coast to Paekakariki. Both the local-road and highway options offer benefits such as ease of travel and access to shopping facilities. This section provides several combinations, from parallel highways to highway-plus-local-road pairings, each with differing management criteria and associated land use. Generalised techniques are applied to the network. Risk is addressed as a reliability index figure that is preset to provide a consistent parameter (equal reliability) for each link investigated. Consequences are assessed with multi-criteria selection focusing on local benefits and shortcomings. Several models are used to build an understanding of how each consequence factor impacts the overall model and to identify the consequences of such a process. Economics is discussed only briefly, since funding in the engineering community is largely governed by financial constraints; no specific analytical assessment has been completed. General results indicate there are supporting arguments for undertaking a multi-criteria selection assessment when comparing parallel networks. Situations do occur in which parallel networks of equal or similar reliability are beneficial, and in those cases equal funding for both links can be supported.
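The two tools the abstract contrasts can be illustrated numerically. In the sketch below, the parallel-reliability formula is the standard one for independent link failures; the criteria, weights, and scores are invented for the example.

```python
# Illustration of the two assessment tools (toy numbers, not thesis data):
# reliability of two links in parallel, and a weighted multi-criteria score.

def parallel_reliability(r1, r2):
    """System survives unless both links fail (independence assumed)."""
    return 1 - (1 - r1) * (1 - r2)

def mcs_score(scores, weights):
    """Weighted-sum multi-criteria score for one expenditure option."""
    return sum(s * w for s, w in zip(scores, weights))

# Two equally reliable links beat either one alone:
print(parallel_reliability(0.95, 0.95))          # 0.9975

# Hypothetical criteria: travel time, local access, upgrade cost
weights = [0.5, 0.3, 0.2]
highway = mcs_score([0.9, 0.4, 0.5], weights)    # 0.67
local   = mcs_score([0.6, 0.9, 0.7], weights)    # 0.71
print(highway, local)  # the "weaker" link can score higher on local criteria
```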
286

The design and performance of automatically-controlled feedforward amplifiers

Bennett, David William January 1995
No description available.
287

An investigation into the professional development of teachers at higher education institutions in the United Arab Emirates

Abu Nasra, Juma A. January 2000
No description available.
288

Profile-driven parallelisation of sequential programs

Tournavitis, Georgios January 2011
Traditional parallelism detection in compilers is performed by means of static analysis, more specifically data and control dependence analysis. The information available at compile time, however, is inherently limited and therefore restricts parallelisation opportunities. Furthermore, applications written in C – which represent the majority of today's scientific, embedded and system software – make use of many low-level features and an intricate programming style that forces the compiler to make even more conservative assumptions. Despite numerous proposals to handle this uncertainty at compile time using speculative optimisation and parallelisation, the software industry still lacks pragmatic approaches that extract coarse-grain parallelism to exploit the multiple processing units of modern commodity hardware. This thesis introduces a novel approach for extracting and exploiting multiple forms of coarse-grain parallelism from sequential applications written in C. We use profiling information to overcome the limitations of static data and control-flow analysis, enabling more aggressive parallelisation. Profiling is performed using an instrumentation scheme operating at the Intermediate Representation (IR) level of the compiler. In contrast to existing approaches that depend on low-level binary tools and debugging information, IR profiling provides precise and direct correlation of profiling information back to the IR structures of the compiler. Additionally, our approach is orthogonal to existing automatic parallelisation approaches, so additional fine-grain parallelism may still be exploited. We demonstrate the applicability and versatility of the proposed methodology in two studies that target different forms of parallelism. First, we focus on the exploitation of loop-level parallelism, which is abundant in many scientific and embedded applications. We evaluate our parallelisation strategy against the NAS and SPEC FP benchmarks on two different multi-core platforms (a shared-memory Intel Xeon SMP and a heterogeneous distributed-memory IBM Cell blade). Empirical evaluation shows that our approach not only yields significant improvements over state-of-the-art parallelising compilers, but also comes close to, and sometimes exceeds, the performance of manually parallelised codes. On average, our methodology achieves 96% of the performance of the hand-tuned parallel benchmarks on the Intel Xeon platform, and a significant speedup on the Cell platform. The second study addresses the problem of partially sequential loops, typically found in implementations of multimedia codecs. We develop a more powerful whole-program representation based on the Program Dependence Graph (PDG) that supports profiling, partitioning and code generation for pipeline parallelism. In addition, we demonstrate how this enhances conventional pipeline parallelisation by incorporating support for multi-level loops and pipeline stage replication in a uniform and automatic way. Experimental results on a set of complex multimedia and stream-processing benchmarks confirm the effectiveness of the proposed methodology, which yields speedups of up to 4.7 on an eight-core Intel Xeon machine.
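The core profiling idea can be illustrated with a toy dependence check over a recorded iteration trace. The sketch below is a minimal DOALL test under assumed trace formats, not the thesis's instrumentation scheme.

```python
# Minimal sketch (assumed formats, not the thesis tool) of profile-based
# loop dependence detection: log each iteration's read and write address
# sets and flag any cross-iteration overlap, which forbids DOALL execution.

def has_cross_iteration_dependence(trace):
    """trace: list of (reads, writes) address sets, one per iteration."""
    seen_writes = set()
    for reads, writes in trace:
        if reads & seen_writes or writes & seen_writes:
            return True          # a later iteration touches an earlier write
        seen_writes |= writes
    return False

# a[i] = b[i] * 2   -> disjoint accesses per iteration: parallelisable
doall = [({100 + i}, {200 + i}) for i in range(4)]
# a[i] = a[i-1] + 1 -> iteration i reads iteration i-1's write
recurrence = [({200 + i - 1}, {200 + i}) for i in range(4)]

print(has_cross_iteration_dependence(doall))       # False
print(has_cross_iteration_dependence(recurrence))  # True
```

As the abstract notes, such profile-derived evidence is unsound in general (it reflects only the observed inputs), which is why it complements rather than replaces static analysis.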
289

OPERATIONAL DECISION MAKING IN COMPOUND ENERGY SYSTEMS USING MULTI-LEVEL MULTI-PARADIGM SIMULATION-BASED OPTIMIZATION

Mazhari, Esfandyar M. January 2011
A two-level hierarchical simulation and decision modeling framework is proposed for electric power networks involving PV-based solar generators, various storage options, and a grid connection. The high-level model, from a utility company's perspective, concerns operational decision making and the definition of regulations for customers, aiming at reduced cost and enhanced reliability. The lower-level model concerns changes in power quality and in demand behavior caused by customers' responses to the operational decisions and regulations made by the utility company at the high level. The higher-level simulation is based on system dynamics and agent-based modeling, while the lower-level simulation is based on agent-based modeling and circuit-level continuous-time modeling. The proposed two-level model incorporates a simulation-based optimization engine that combines three meta-heuristics (Scatter Search, Tabu Search, and Neural Networks) to find optimal operational decisions. In addition, a reinforcement learning algorithm based on Markov decision process tools is used to generate decision policies. An integration and coordination framework is developed that details the sequence, frequency, and types of interactions between the two models. The proposed framework is demonstrated in several case studies using real-time or historical data for solar insolation, storage units, demand profiles, and the grid price of electricity (i.e., avoided cost). Challenges addressed in the case studies and applications include 1) finding the best policy, optimum price, and regulations for a utility company while keeping customers' electricity quality within the accepted range; 2) capacity planning of electricity systems with PV generators, storage systems, and the grid; and 3) finding the optimum threshold price used to decide how much energy should be bought from, or sold to, the grid to minimize cost. Mathematical formulations and simulation and decision modeling methodologies are presented. A grid-storage analysis is performed for arbitrage, to explore whether it will become beneficial to use storage systems along with the grid as storage technology improves and the cost of electrical energy rises. An information model is discussed that facilitates interoperability of different applications in the proposed hierarchical simulation and decision environment for energy systems.
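The threshold-price decision in challenge 3 can be sketched as a simple dispatch rule. The buy-below/sell-above policy, the prices, and the storage parameters below are illustrative assumptions, not the thesis's optimized policy.

```python
# Toy threshold-price dispatch sketch (assumed policy and numbers):
# charge storage from the grid when the price is below the threshold,
# discharge and sell when it is above, hold otherwise.

def dispatch(price, soc, threshold, capacity=10.0, step=1.0):
    """Return (action, new state of charge) for one decision period."""
    if price < threshold and soc < capacity:
        return "buy/charge", min(capacity, soc + step)
    if price > threshold and soc > 0:
        return "sell/discharge", max(0.0, soc - step)
    return "hold", soc

soc = 5.0
for price in [0.08, 0.12, 0.25, 0.30, 0.07]:   # hypothetical $/kWh series
    action, soc = dispatch(price, soc, threshold=0.15)
    print(f"price={price:.2f}  {action:<14} soc={soc}")
```

Finding the threshold that minimizes total cost over a price series is then the optimization problem the simulation-based engine targets.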
290

The place of religion: spatialised subjectivities of Muslims, Sikhs and Christians in Southampton

Legg, Kristina Louise January 2000
No description available.
