661

Mapping of recursive algorithms onto multi-rate arrays

Zheng, Yue-Peng 27 May 1994 (has links)
In this dissertation, the multi-rate array (MRA) architecture and its synthesis are proposed and developed. Using multi-coordinate systems (MCS), a unified theory is developed for mapping algorithms from their original algorithmic specifications onto multi-rate arrays. A multi-rate array is a grid of processors in which each interconnection may have its own clock rate; operations of different complexities run at their own clock rates, increasing throughput and efficiency. A class of algorithms named directional affine recurrence equations (DARE) is defined. The dependence space of a DARE can be decomposed into uniform and non-uniform subspaces; when projected along the non-uniform subspace, the resultant array structure is regular. Limitations and restrictions of this approach are investigated, and a procedure for mapping DAREs onto MRAs is developed. To generalize this approach, a synthesis theory is developed whose initial specification is an affine direct input output (ADIO), which aims at removing redundancies from algorithms. Most ADIO specifications are the original algorithmic specifications. A multi-coordinate system (MCS) is used to represent an algorithm's dependence structures: in an MCS, the index space of each variable in an algorithm is defined relative to its own coordinate system. Most algorithms traditionally considered irregular exhibit regular dependence structures under the MCS technique. Procedures are provided for transforming algorithms from their original algorithmic specifications to regular specifications. Multi-rate schedules and multi-rate timing functions are studied; the solution of multi-rate timing functions can be formulated as a linear programming problem. Procedures are provided for mapping ADIOs onto multi-rate VLSI systems, and examples illustrate the synthesis of MRAs from DAREs and ADIOs. The first major contribution of this dissertation is the development of concrete, executable MRA architectures. 
The second is the introduction of the MCS system and its application in developing the theory for synthesizing MRAs from original algorithmic specifications. / Graduation date: 1995
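The linear-programming formulation of timing functions mentioned in this abstract follows the classical affine scheduling setup for systolic arrays. A minimal sketch of that setup (single-rate special case; the symbols and the multi-rate extension shown here are generic textbook notation, not necessarily the dissertation's):

```latex
\begin{aligned}
\text{find } & \lambda \in \mathbb{Z}^n \\
\text{minimizing } & \lambda \cdot (I_{\max} - I_{\min}) \\
\text{subject to } & \lambda \cdot d_k \ge 1 \quad \text{for every dependence vector } d_k,
\end{aligned}
```

so that the affine timing function $t(I) = \lambda \cdot I + c$ assigns each computation a clock step no earlier than the steps of the computations it depends on. In a multi-rate setting one would additionally attach a clock rate $r_v$ to each variable $v$, scaling its timing function to $t_v(I) = r_v(\lambda \cdot I) + c_v$; the constraints remain linear, hence solvable as a linear program.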
662

Exact and Heuristic Methods for the Weapon Target Assignment Problem

Ahuja, Ravindra K., Kumar, Arvind, Jha, Krishna, Orlin, James B. 02 April 2004 (has links)
The Weapon Target Assignment (WTA) problem is a fundamental problem arising in defense-related applications of operations research. It consists of optimally assigning n weapons to m targets so that the total expected survival value of the targets after all engagements is minimized. The WTA problem can be formulated as a nonlinear integer programming problem and is known to be NP-complete. No exact methods exist that can solve even small instances (for example, 20 weapons and 20 targets). Although several heuristic methods have been proposed for the WTA problem, the absence of exact methods means no estimates are available on the quality of the solutions such heuristics produce. In this paper, we suggest linear programming, integer programming, and network flow based lower-bounding methods, which we use to obtain several branch-and-bound algorithms for the WTA problem. We also propose a network flow based construction heuristic and a very large-scale neighborhood (VLSN) search algorithm. We present computational results indicating that we can solve moderately large instances (up to 80 weapons and 80 targets) of the WTA problem optimally, and obtain almost-optimal solutions of fairly large instances (up to 200 weapons and 200 targets) within a few seconds.
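The objective described above is the standard WTA form: target j with value V[j] survives weapon i with probability 1 - p[i][j]. A minimal sketch of that objective, with brute-force enumeration feasible only for tiny instances (function and variable names are ours, not the paper's; the paper's algorithms are far more sophisticated):

```python
from itertools import product

def survival_value(assign, V, p):
    """Expected total survival value when weapon i is assigned to
    target assign[i]: each target j retains V[j] times the product
    of (1 - p[i][j]) over the weapons aimed at it."""
    surv = list(V)
    for i, j in enumerate(assign):
        surv[j] *= 1.0 - p[i][j]
    return sum(surv)

def brute_force_wta(V, p):
    """Enumerate all m^n assignments of n weapons to m targets and
    return the one minimizing expected survival value."""
    n, m = len(p), len(V)
    best = min(product(range(m), repeat=n),
               key=lambda a: survival_value(a, V, p))
    return best, survival_value(best, V, p)
```

For two weapons and two targets with values [10, 5], this picks the assignment that trades off each weapon's kill probabilities against target values; the m^n enumeration is exactly why exact methods struggle beyond tiny sizes.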
663

Symbolic Construction of a 2D Scale-Space Image

Saund, Eric 01 April 1988 (has links)
The shapes of naturally occurring objects characteristically involve spatial events occurring at many scales. This paper offers a symbolic approach to constructing a primitive shape description across scales for 2D binary (silhouette) shape images: grouping operations are performed over collections of tokens residing on a Scale-Space Blackboard. Two types of grouping operations are identified that, respectively: (1) aggregate edge primitives at one scale into edge primitives at a coarser scale, and (2) group edge primitives into partial-region assertions, including curved-contours, primitive-corners, and bars. This approach avoids several drawbacks of numerical smoothing methods.
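The type-(1) grouping operation can be pictured as merging nearby, similarly oriented edge tokens into a single token at the next coarser scale. This is a hypothetical sketch of that idea only; the paper's actual blackboard tokens and grouping rules are richer (the `EdgeToken` fields, thresholds, and greedy merge rule here are our illustrative assumptions):

```python
import math
from dataclasses import dataclass

@dataclass
class EdgeToken:
    x: float      # position
    y: float
    theta: float  # orientation in radians
    scale: int    # scale level on the blackboard

def aggregate(tokens, radius=2.0, dtheta=0.3):
    """One greedy pass: collect tokens that are within `radius` of a
    seed token and within `dtheta` of its orientation, and replace
    each group by a single averaged token at the next coarser scale."""
    merged, used = [], set()
    for i, a in enumerate(tokens):
        if i in used:
            continue
        group, _ = [a], used.add(i)
        for j in range(i + 1, len(tokens)):
            b = tokens[j]
            if j in used:
                continue
            if (math.hypot(a.x - b.x, a.y - b.y) <= radius
                    and abs(a.theta - b.theta) <= dtheta):
                group.append(b)
                used.add(j)
        k = len(group)
        merged.append(EdgeToken(sum(t.x for t in group) / k,
                                sum(t.y for t in group) / k,
                                sum(t.theta for t in group) / k,
                                a.scale + 1))
    return merged
```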
664

Global Depth Perception from Familiar Scene Structure

Torralba, Antonio, Oliva, Aude 01 December 2001 (has links)
In the absence of cues for absolute depth measurement such as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges, and junctions may provide a 3D model of the scene, but it will not convey the actual "size" of the space. One possible source of information for absolute depth estimation is the image size of known objects; however, this is computationally complex due to the difficulty of the object recognition process. Here we propose a source of information for absolute depth estimation that does not rely on specific objects: we introduce a procedure for absolute depth estimation based on the recognition of the whole scene. The shape of the space of the scene and the structures present in it are strongly related to the scale of observation. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene, and therefore its absolute mean depth. We illustrate the usefulness of computing the mean depth of the scene with applications to scene recognition and object detection.
665

Employee perceptions on managing diversity in the workplace / S.G. Ralepeli

Ralepeli, Selebeli Gideon January 2008 (has links)
Thesis (M.B.A.)--North-West University, Potchefstroom Campus, 2009.
666

The Emergence of Community Gardens in Miami, Florida: Geographical Perspectives

Drake, Luke 01 January 2010 (has links)
Community gardens (CGs) have been well studied in several North American cities, but less is known about them in places with emerging CG movements. There are no existing studies on CGs in Miami, and the total number of CGs in Miami is unknown, but in the past five years there has been a rapid increase in interest in this topic from a variety of stakeholders and organizations. To add to the empirical knowledge of CGs, the author conducted case studies of the six highest-profile projects. This exploratory research consisted of 12 semi-structured interviews and analysis of government records and published documents. The findings indicate CGs are very diverse, both in their locations across socio-economic areas and in the spatial strategies of their organizers. The multiple meanings of community, and the multiple scales at which CGs are organized, illustrate the complexities of such projects. Although CG advocates promote them as ways to achieve community self-reliance, recent critiques have argued that CGs offer some benefits but cannot redress large-scale inequalities. Perhaps these inadequacies are due in part to assumptions that localities are produced exclusively by their residents. This study draws on geographical theory to argue that a relational approach to scale may lead to a more accurate practice and help establish CGs as permanent parts of cities. It concludes that CGs are highly complex and are not simple solutions for community development, and that more care is needed in their advocacy.
667

MULTI-SCALE MODELING OF POLYMERIC MATERIALS: AN ATOMISTIC AND COARSE-GRAINED MOLECULAR DYNAMICS STUDY

Wang, Qifei 01 August 2011 (has links)
Computational study of the structural, thermodynamic, and transport properties of polymeric materials at equilibrium requires multi-scale modeling techniques, because the relevant processes occur across a broad spectrum of time and length scales. Classical molecular-level simulation, such as molecular dynamics (MD), has proved very useful in the study of polymeric oligomers and short chains. However, there is a strong, nonlinear dependence of relaxation time on chain length that requires less computationally demanding techniques to describe the behavior of longer chains. As a mesoscale modeling technique, the coarse-grained (CG) procedure has been developed recently to extend molecular simulation to larger time and length scales. With a CG model, the structure and dynamics of long-chain polymeric systems can be studied directly through CG-level simulation. In CG simulations, the generation of the CG potential is an area of current research activity. The work in this dissertation focused both on developing techniques for generating CG potentials and on applying CG potentials in coarse-grained molecular dynamics (CGMD) simulations to describe structural, thermodynamic, and transport properties of various polymer systems. First, an improved procedure was developed for generating CG potentials from structural data obtained from atomistic simulation of short chains: the Ornstein-Zernike integral equation with the Percus-Yevick approximation was invoked to solve this inverse problem (OZPY-1). The OZPY-1 method was then applied to CG modeling of polyethylene terephthalate (PET) and polyethylene glycol (PEG). Finally, the CG procedure was applied to a model of sulfonated and cross-linked poly(1,3-cyclohexadiene) (sxPCHD), a polymer designed for future application as a proton exchange membrane material in fuel cells. 
Through the above efforts, we developed an understanding of the strengths and limitations of various procedures for generating CG potentials. We were able to simulate entangled PET chains and study their structure and dynamics as a function of chain length. The work also provides the first glimpses of the nanoscale morphology of the hydrated sxPCHD membrane; an understanding of this structure is important for predicting proton conductivity in the membrane.
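The inverse problem named in this abstract rests on two standard liquid-state relations. In their textbook form (generic notation; the dissertation's exact formulation for chain molecules may differ), the Ornstein-Zernike equation relates the total correlation function $h(r)$ to the direct correlation function $c(r)$ at number density $\rho$:

```latex
h(r) = c(r) + \rho \int c\!\left(\lvert \mathbf{r} - \mathbf{r}' \rvert\right) h(r')\, d\mathbf{r}' ,
```

and the Percus-Yevick closure ties $c(r)$ to the pair potential $u(r)$:

```latex
c(r) = \left[1 - e^{\beta u(r)}\right] g(r), \qquad g(r) = h(r) + 1, \qquad \beta = \frac{1}{k_B T} .
```

Run forward, these yield structure from a known potential. The inverse direction sketched here is: measure $g(r)$ from the atomistic simulation, solve the Ornstein-Zernike equation for $c(r)$, and invert the closure to obtain the effective CG pair potential

```latex
u(r) = k_B T \ln\!\left[1 - \frac{c(r)}{g(r)}\right].
```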
668

Spatially-Dependent Reactor Kinetics and Supporting Physics Validation Studies at the High Flux Isotope Reactor

Chandler, David 01 August 2011 (has links)
The computational ability to accurately predict the dynamic behavior of a nuclear reactor core in response to reactivity-induced perturbations is an important subject in the field of reactor physics. Space-time and point kinetics methodologies were developed for the purpose of studying the transient-induced behavior of the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor's (HFIR) compact core. The space-time simulations employed the three-group neutron diffusion equations, which were solved via the COMSOL partial differential equation coefficient application mode. The point kinetics equations were solved with the PARET code and the COMSOL ordinary differential equation application mode. The basic nuclear data were generated by the NEWT and MCNP5 codes, and transients initiated by control cylinder and hydraulic tube rabbit ejections were studied. The space-time models developed in this research consider only the neutronics aspect of reactor kinetics, and therefore do not include fluid flow, heat transfer, or reactivity feedback. The research presented in this dissertation is the first step toward creating a comprehensive multiphysics methodology for studying the dynamic behavior of the HFIR core during reactivity-induced perturbations. The results of this study show that point kinetics is adequate for small perturbations in which the power distribution is assumed to be time-independent, but space-time methods must be utilized to determine localized effects. En route to developing the kinetics methodologies, validation studies and methodology updates were performed to validate the major neutronic analysis tools used at HFIR. A complex MCNP5 model of HFIR was validated against critical-experiment power distribution and effective multiplication factor data. The ALEPH and VESTA depletion tools were validated against post-irradiation uranium isotopic mass spectrographic data for three unique full-power cycles. 
A TRITON model was developed and used to calculate the buildup and reactivity worth of helium-3 in the beryllium reflector, determine whether discharged beryllium reflectors are at transuranic waste limits for disposal purposes, determine whether discharged beryllium reflectors can be reclassified from hazard category 1 waste to category 2 or 3 for transportation and storage purposes, and to calculate the curium target rod nuclide inventory following irradiation in the flux trap.
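The point kinetics equations mentioned above reduce, for one effective delayed-neutron group, to a pair of coupled ODEs in the neutron population n(t) and precursor concentration C(t). A minimal forward-Euler sketch (the parameter values here are generic illustrative numbers, not HFIR data, and the dissertation used PARET and COMSOL rather than this toy integrator):

```python
def point_kinetics(rho, beta=0.0076, Lam=4e-4, lam=0.08,
                   n0=1.0, t_end=1.0, dt=1e-5):
    """One-delayed-group point kinetics, forward Euler:
        dn/dt = ((rho - beta) / Lam) * n + lam * C
        dC/dt = (beta / Lam) * n - lam * C
    Starts from delayed-precursor equilibrium and returns n(t_end)
    for a constant reactivity insertion rho."""
    n = n0
    C = beta * n0 / (Lam * lam)  # equilibrium: dC/dt = 0 at t = 0
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam) * n + lam * C
        dC = (beta / Lam) * n - lam * C
        n += dn * dt
        C += dC * dt
    return n
```

With zero reactivity the population stays flat; a small positive insertion below prompt critical (rho < beta) gives the slow, delayed-neutron-controlled rise that makes the point model adequate for small perturbations, as the abstract notes.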
669

Clustering properties of low-redshift QSO absorption line systems towards the galactic poles /

Venden Berk, Daniel E. January 1997 (has links)
Thesis (Ph. D.)--University of Chicago, Dept. of Astronomy and Astrophysics, August 1997. / Includes bibliographical references. Also available on the Internet.
670

A VLSI architecture for a neurocomputer using higher-order predicates

Geller, Ronnie Dee 05 1900 (has links) (PDF)
M.S. / Computer Science & Engineering / Some biological aspects of neural interactions are presented and used as a basis for a computational model in the development of a new type of computer architecture. A VLSI microarchitecture is proposed that efficiently implements the neural-based computing methods. An analysis of the microarchitecture is presented to show that it is feasible using currently available VLSI technology. The performance expectations of the proposed system are analyzed and compared to conventional computer systems executing similar algorithms. The proposed system is shown to have comparatively attractive performance and cost/performance ratio characteristics. Some discussion is given on system level characteristics including initialization and learning.
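Higher-order (sigma-pi) units of the kind this thesis builds on compute their activation from product terms over inputs as well as linear terms. A hedged sketch of a second-order unit (the weight layout and logistic squashing are our illustrative assumptions, not the thesis's encoding):

```python
import itertools
import math

def sigma_pi_unit(x, w1, w2, bias=0.0):
    """Second-order neuron: bias plus linear terms w1[i]*x[i] plus
    pairwise product terms w2[k]*x[j]*x[k], passed through a
    logistic squashing function. w2 is indexed over the pairs
    produced by itertools.combinations(range(len(x)), 2)."""
    s = bias + sum(wi * xi for wi, xi in zip(w1, x))
    pairs = itertools.combinations(range(len(x)), 2)
    for (j, k), w in zip(pairs, w2):
        s += w * x[j] * x[k]
    return 1.0 / (1.0 + math.exp(-s))
```

The product terms are what give higher-order units their extra expressive power: a single second-order weight can make the unit respond to a conjunction of inputs (for example, firing only when two inputs are simultaneously active), which a first-order unit cannot do without hidden layers.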
