121

Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions

Alemany, Kristina 13 November 2009 (has links)
Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user manipulation, mission design has relied on expert knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables - launch date, time of flight, and asteroid stay times (when applicable) - and were characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is conceptual design, where broad exploration of the design space is critical, with the goal of rapidly identifying a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. The first step applies a three-level heuristic sequence, developed from the physics of the problem, which allows for efficient pruning of the design space. The second step applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. It is then validated on a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, including the 2nd and 3rd Global Trajectory Optimization Competition problems, using previously identified good solutions as validation benchmarks. The methodology is shown to reduce the number of asteroid sequences requiring low-thrust optimization by 6-7 orders of magnitude relative to the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions not previously reported by any of the competitors. Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.
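To make the pruning idea concrete, the sketch below shows a generic branch-and-bound over asteroid sequences with a deliberately optimistic completion bound. The leg-cost matrix, bound, and sequence length are illustrative assumptions; this is not the thesis's three-level heuristic or its low-thrust trajectory optimizer.

```python
import random

def branch_and_bound(leg_cost, seq_len):
    """Find the cheapest sequence of seq_len distinct asteroids.
    leg_cost[i][j] is an assumed precomputed cost of transferring from i to j."""
    n = len(leg_cost)
    cheapest_leg = min(leg_cost[i][j] for i in range(n) for j in range(n) if i != j)
    best = {"cost": float("inf"), "seq": None}

    def extend(seq, cost):
        if len(seq) == seq_len:
            if cost < best["cost"]:
                best["cost"], best["seq"] = cost, tuple(seq)
            return
        legs_left = seq_len - len(seq)
        # Prune: even if every remaining leg were as cheap as the cheapest
        # leg in the whole problem, this branch cannot beat the incumbent.
        if cost + legs_left * cheapest_leg >= best["cost"]:
            return
        for nxt in range(n):
            if nxt not in seq:
                extend(seq + [nxt], cost + leg_cost[seq[-1]][nxt])

    for start in range(n):
        extend([start], 0.0)
    return best

# Toy usage with a random 8-asteroid cost matrix:
random.seed(0)
costs = [[random.uniform(1, 10) for _ in range(8)] for _ in range(8)]
print(branch_and_bound(costs, seq_len=4))
```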
122

High-performance algorithms and software for large-scale molecular simulation

Liu, Xing 08 June 2015 (has links)
Molecular simulation is an indispensable tool in many different disciplines such as physics, biology, chemical engineering, materials science, drug design, and others. Performing large-scale molecular simulation is of great interest to biologists and chemists, because many important biological and pharmaceutical phenomena can only be observed in very large molecular systems and after sufficiently long time dynamics. On the other hand, molecular simulation methods usually have very steep computational costs, which limits current studies to relatively small systems. The gap between the scale existing techniques can handle and the scale of interest has become a major barrier to applying molecular simulation to real-world problems. Studying large-scale molecular systems therefore requires developing highly parallel simulation algorithms and constantly adapting them to rapidly changing high-performance computing architectures. However, many existing molecular simulation algorithms and codes date from more than a decade ago, when they were designed for sequential computers or early parallel architectures; they may not scale efficiently and do not fully exploit the features of today's hardware. Given the rapid evolution of computer architectures, the time has come to revisit these algorithms and codes. In this thesis, we address the computational challenges of large-scale molecular simulation by presenting high-performance algorithms and software for two important molecular simulation applications: Hartree-Fock (HF) calculations and hydrodynamics simulations, on highly parallel computer architectures. The algorithms and software presented in this thesis have been used by biologists and chemists to study problems that could not be solved using existing codes. The parallel techniques and methods developed in this work can also be applied to other molecular simulation applications.
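As a small illustration of why such workloads need dynamic rather than static work distribution, the sketch below balances irregularly sized task blocks (of the kind that arise in Fock-matrix construction) across cores with a shared pool. The task and its cost model are synthetic stand-ins, not the thesis's algorithms.

```python
from multiprocessing import Pool

def compute_block(block):
    # Stand-in for an expensive, irregularly sized kernel
    # (e.g. one block of two-electron integrals).
    i, j = block
    return sum((i * k + j) % 7 for k in range(10_000 * (1 + (i + j) % 5)))

if __name__ == "__main__":
    # Triangular block list -> very uneven work per task.
    blocks = [(i, j) for i in range(32) for j in range(i + 1)]
    with Pool() as pool:
        # chunksize=1: each idle worker grabs the next block, so fast workers
        # are never stuck waiting on a statically assigned heavy partition.
        results = pool.map(compute_block, blocks, chunksize=1)
    print(sum(results))
```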
123

The Gander search engine for personalized networked spaces

Michel, Jonas Reinhardt 05 March 2013 (has links)
The vision of pervasive computing is one of a personalized space populated with vast amounts of data that can be exploited by humans. Such Personalized Networked Spaces (PNetS), and the requisite support for general-purpose, expressive spatiotemporal search of the “here” and “now”, have eluded realization, due primarily to the complexities of indexing, storing, and retrieving relevant information within a vast collection of highly ephemeral data. This thesis presents the Gander search engine, founded on a novel conceptual model of search in PNetS and targeted at environments characterized by large volumes of highly transient data. We present this model and realize it in the architecture and implementation of the Gander search engine. Gander connects formal notions of sampling a search space to expressive, spatiotemporally aware protocols that perform distributed query processing in situ. This thesis evaluates Gander through a user study that examines the perceived usability and utility of our mobile application, and benchmarks its performance in large PNetS through network simulation.
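The core ranking problem such a search engine faces can be pictured as scoring items by how near and how fresh they are. The sketch below is a minimal illustration of that idea only; the decay constants and data layout are arbitrary assumptions, not Gander's model or protocols.

```python
import math
import time

def relevance(item_pos, item_ts, query_pos, now,
              space_scale=50.0, time_scale=300.0):
    """Score an item by spatial proximity and freshness (decay constants
    are arbitrary illustrative choices, in metres and seconds)."""
    d = math.dist(item_pos, query_pos)
    age = now - item_ts
    return math.exp(-d / space_scale) * math.exp(-age / time_scale)

def search(items, query_pos, k=10):
    """items: list of ((x, y), timestamp, payload) tuples."""
    now = time.time()
    return sorted(items,
                  key=lambda it: -relevance(it[0], it[1], query_pos, now))[:k]

# e.g. two readings, one nearby and fresh, one far and stale:
items = [((1.0, 2.0), time.time() - 5, "temp=21C"),
         ((400.0, 90.0), time.time() - 3600, "temp=19C")]
print(search(items, query_pos=(0.0, 0.0), k=1))
```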
124

Dataflow parallelism for large scale data mining

Daruru, Srivatsava 20 December 2010 (has links)
The unprecedented and exponential growth of data, along with the advent of multi-core processors, has triggered a massive paradigm shift from traditional single-threaded programming to parallel programming. A number of parallel programming paradigms have thus been proposed and have become pervasive and inseparable from any large production environment. With massive amounts of data available and an ever-increasing business need to process and analyze it quickly and cheaply, demand for fast data mining algorithms on commodity hardware has grown accordingly. This thesis explores a parallel programming model called dataflow, the essence of which is computation organized by the flow of data through a graph of operators. This paradigm exhibits pipeline, horizontal, and vertical parallelism and keeps only the data of the active operators in memory at any given time, allowing it to scale easily to very large datasets. The thesis describes the dataflow implementation of two data mining applications on huge datasets. We first develop an efficient dataflow implementation of a Collaborative Filtering (CF) algorithm based on weighted co-clustering and test its effectiveness on the large, sparse Netflix dataset. This implementation of the recommender system was able to rapidly train and predict over 100 million ratings within 17 minutes on a commodity multi-core machine. We then describe a dataflow implementation of a non-parametric, density-based clustering algorithm called Auto-HDS to automatically detect small and dense clusters on a massive astronomy dataset. This implementation was able to discover dense clusters at varying density thresholds and generate a compact cluster hierarchy on 100k points in less than 1.3 hours. We also show its ability to scale to millions of points as we increase the number of available resources. Our experimental results illustrate the ability of this model to “scale” well to massive datasets and its ability to rapidly discover useful patterns in two different applications.
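The operator-graph idea can be illustrated with plain Python generators, where each operator pulls records from its upstream neighbor, so only in-flight records and operator state are resident in memory. This is a minimal sketch of the paradigm, not the thesis's dataflow engine, and the CSV schema and filename are assumptions.

```python
def read_ratings(path):                      # source operator
    with open(path) as f:
        for line in f:
            user, item, rating = line.strip().split(",")
            yield user, item, float(rating)

def filter_low(records, threshold):          # transform operator
    for user, item, rating in records:
        if rating >= threshold:
            yield user, item, rating

def mean_by_item(records):                   # aggregating sink
    totals = {}
    for _, item, rating in records:
        n, s = totals.get(item, (0, 0.0))
        totals[item] = (n + 1, s + rating)
    return {item: s / n for item, (n, s) in totals.items()}

# Pipeline: read -> filter -> aggregate, streaming one record at a time, so
# memory holds only the operators' state, never the whole dataset.
# means = mean_by_item(filter_low(read_ratings("ratings.csv"), 3.0))
```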
125

Large Scale Parallel Inference of Protein and Protein Domain families

Rezvoy, Clément 28 September 2011 (has links) (PDF)
Protein domains are recurring, independent segments of proteins. The combinatorial arrangement of domains is at the root of the functional and structural diversity of proteins. Several methods have been developed to infer protein domain decomposition and domain family clustering from sequence information alone. MkDom2 is one such method. MkDom2 infers domain families greedily, one after another, to create a delineation of domains on proteins and a clustering of those domains into families. MkDom2 is instrumental in building the ProDom database. The exponential growth of the number of sequences to process has, however, rendered MkDom2 obsolete: it would now take several years to compute a new release of ProDom. We present a new algorithm, MPI_MkDom2, that computes several families at once across a distributed computing platform. MPI_MkDom2 is an asynchronous distributed algorithm that manages load balancing to ensure efficient platform usage; it guarantees a non-overlapping partitioning of the whole protein set. A new proximity measure is defined to assess the effect of the parallel computation on the result. We also propose a second algorithm, MPI_MkDom3, that simultaneously computes a clustering of protein domains as well as of full proteins sharing the same domain decomposition.
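The greedy "one family after another" structure can be sketched in a few lines. The similarity predicate below is a hypothetical placeholder; the real MkDom2 works on domain delineations and sequence alignments, and this only illustrates the non-overlapping greedy clustering pattern.

```python
def greedy_families(items, similar):
    """Greedily peel off one family at a time; families never overlap."""
    families = []
    remaining = set(items)
    while remaining:
        # Pick the seed whose candidate family is currently largest.
        seed = max(remaining,
                   key=lambda s: sum(1 for y in remaining if similar(s, y)))
        family = {y for y in remaining if similar(seed, y)}
        families.append((seed, family))
        remaining -= family
    return families

# e.g. cluster integers by "same value mod 3":
print(greedy_families(range(10), lambda a, b: a % 3 == b % 3))
```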
126

Distributed computing impact on performance for solving physics tasks

Kvietkauskas, Gediminas 26 August 2013 (has links)
From a programmer's perspective, the line between hardware and software is rapidly blurring. To reach the performance that modern, computation-heavy applications require, programmers will have to exploit alternative computing resources beyond the CPU, such as graphics cards. This thesis reviews the current state of parallel computing on GPUs and investigates the solution of physics-simulation tasks using GPUs as computing units. Experiments are carried out by analyzing existing products and applying OpenCL for distributed computation; these experiments show the advantages and disadvantages of GPU computing for specific physics tasks.
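As a flavor of the OpenCL workflow discussed, the sketch below offloads a toy physics update (an explicit Euler position step) to the GPU with the PyOpenCL bindings. The kernel, array sizes, and time step are illustrative assumptions, not the thesis's experiments.

```python
import numpy as np
import pyopencl as cl

src = """
__kernel void step(__global float *pos, __global const float *vel, float dt) {
    int i = get_global_id(0);
    pos[i] += vel[i] * dt;   // one explicit Euler integration step
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, src).build()

n = 1 << 20
pos = np.zeros(n, dtype=np.float32)
vel = np.random.rand(n).astype(np.float32)
mf = cl.mem_flags
pos_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=pos)
vel_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=vel)

# Launch one work-item per particle, then copy the result back to the host.
prog.step(queue, (n,), None, pos_buf, vel_buf, np.float32(0.01))
cl.enqueue_copy(queue, pos, pos_buf)
print(pos[:4])
```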
127

PROVIDING A PERSISTENT SPACE PLUG-AND-PLAY AVIONICS NETWORK ON THE INTERNATIONAL SPACE STATION

Jacobs, Zachary A. 01 January 2013 (has links)
The CubeLab is a new payload standard that greatly improves access to the International Space Station (ISS) for small, rapid turn-around microgravity experiments. CubeLabs are small (less than 16”x8”x4” and under 10kg) modular payloads that interface with the NanoRacks Platform aboard the ISS. CubeLabs receive power from the station and transfer data using the standard terrestrial plug-and-play Universal Serial Bus (USB). The Space Plug-and-play Avionics (SPA) architecture is a modular technology for spacecraft that provides an infrastructure for modular satellite components to reduce the time to orbit and development costs for satellites. This paper describes the development of a bus capable of interfacing SPA-1 payloads in the CubeLab form-factor aboard the ISS. This CubeLab also provides the “discover and join” functionality that is necessary for a SPA-1 network of devices. This will ultimately provide persistent SPA capabilities on the ISS which will allow users to send SPA-1 devices to orbit for on-the-fly installation by astronauts.
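The "discover and join" behavior can be pictured as a polling loop over the bus: detect newly attached devices, read their self-describing datasheets, and register them with the network. Everything in the sketch below (enumerate_devices, query_descriptor, register) is an invented placeholder, not the SPA-1, NanoRacks, or CubeLab API.

```python
import time

def discover_and_join(bus, registry, poll_s=1.0):
    """Hypothetical plug-and-play loop: poll, query, register."""
    known = set()
    while True:
        for dev in bus.enumerate_devices():    # hypothetical enumeration call
            if dev.id not in known:
                desc = dev.query_descriptor()  # self-describing datasheet
                registry.register(dev.id, desc)  # device joins the network
                known.add(dev.id)
        time.sleep(poll_s)
```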
128

PROPOSED MIDDLEWARE SOLUTION FOR RESOURCE-CONSTRAINED DISTRIBUTED EMBEDDED NETWORKS

Rexroat, Jason T 01 January 2014 (has links)
The explosion in processing power of embedded systems has enabled distributed embedded networks to perform more complicated tasks. Middleware encapsulates common and network/operating-system-specific functionality into generic, reusable frameworks for managing such distributed networks. This thesis surveys and categorizes popular middleware implementations into three adapted layers: host-infrastructure, distribution, and common services. It then applies a quantitative approach to grading and proposing a single middleware solution from all layers for two target platforms: CubeSats and autonomous unmanned aerial vehicles (UAVs). CubeSats are 10x10x10 cm nanosatellites popular for university-level space missions, which impose power and volume constraints. Autonomous UAVs are similarly popular hobbyist-level vehicles that exhibit similar power and volume constraints. The MAVLink middleware, from the host-infrastructure layer, is proposed as the middleware to manage the distributed embedded networks powering these platforms in future projects. Finally, this thesis presents a performance analysis of MAVLink running on the ARM Cortex-M 32-bit processors that power the target platforms.
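MAVLink has a widely used Python binding, pymavlink, and the sketch below shows its basic connect/heartbeat/read pattern. The UDP endpoint and message type are assumptions for illustration, not details from the thesis.

```python
from pymavlink import mavutil

# Listen on a local UDP port (address is an assumption for illustration).
master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
master.wait_heartbeat()                    # block until a HEARTBEAT arrives
print("heartbeat from system", master.target_system)

# Read one attitude message from the telemetry stream.
msg = master.recv_match(type="ATTITUDE", blocking=True)
print("roll/pitch/yaw:", msg.roll, msg.pitch, msg.yaw)
```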
129

Charge Transfer in Deoxyribonucleic Acid (DNA): Static Disorder, Dynamic Fluctuations and Complex Kinetics.

Edirisinghe Pathirannehelage, Neranjan S 07 January 2011 (has links)
Loosely bonded DNA bases can tolerate large structural fluctuations and form a dissipative environment for a charge traveling through the DNA. The nonlinear, stochastic nature of these structural fluctuations facilitates rich charge dynamics in DNA. We study the complex charge dynamics by solving a nonlinear, stochastic, coupled system of differential equations. Charge transfer between donor and acceptor in DNA occurs via different mechanisms depending on the donor-acceptor separation, changing from a tunneling regime to a polaron-assisted hopping regime. We also found that charge transport strongly depends on the feasibility of polaron formation; hence it has a complex dependence on temperature and on the charge-vibration coupling strength. Mismatched base pairs, such as different conformations of the G・A mispair, cause only minor structural changes in the host DNA molecule, thereby making mispair recognition an arduous task. Electron transport in DNA, which depends strongly on the hopping transfer integrals between the nearest base pairs, which are in turn affected by the presence of a mispair, might be an attractive approach in this regard. I report here on our investigations, via the I-V characteristics, of the effect of a mispair on the electrical properties of homogeneous and generic DNA molecules. The I-V characteristics of DNA were studied numerically within a double-stranded tight-binding model. The parameters of the tight-binding model, such as the transfer integrals and on-site energies, are determined from first-principles calculations. The changes in electrical current through the DNA chain due to the presence of a mispair depend on the conformation of the G・A mispair and are appreciable for DNA consisting of up to 90 base pairs. For homogeneous DNA sequences the current through DNA is suppressed, and the strongest suppression is realized for the G(anti)・A(syn) conformation of the G・A mispair. For inhomogeneous (generic) DNA molecules, the mispair can result in either suppression or enhancement of the current, depending on the type of mispair and the actual DNA sequence.
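As a worked toy version of this kind of calculation, the sketch below computes the Landauer transmission through a one-dimensional tight-binding chain with a single on-site "defect", using the retarded Green's function with wide-band contacts. The parameter values are illustrative, not the first-principles values used in the thesis, and a real DNA model would be a double-stranded ladder rather than a single chain.

```python
import numpy as np

def transmission(eps, t, E, gamma=0.1):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a
    chain with on-site energies eps, nearest-neighbour hopping t, and
    wide-band contacts of strength gamma on the two end sites."""
    n = len(eps)
    H = np.diag(eps) + np.diag([t] * (n - 1), 1) + np.diag([t] * (n - 1), -1)
    sigma = np.zeros((n, n), dtype=complex)
    sigma[0, 0] = sigma[-1, -1] = -0.5j * gamma   # contact self-energies
    G = np.linalg.inv(E * np.eye(n) - H - sigma)  # retarded Green's function
    gL = np.zeros((n, n)); gL[0, 0] = gamma       # broadening matrices
    gR = np.zeros((n, n)); gR[-1, -1] = gamma
    return np.real(np.trace(gL @ G @ gR @ G.conj().T))

# Compare a 20-site chain with one mispair-like on-site defect at site 10
# against the homogeneous chain at the same energy:
eps = np.zeros(20); eps[10] = 0.5
print(transmission(eps, t=-0.2, E=0.1), transmission(np.zeros(20), -0.2, 0.1))
```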
130

Towards Fault Reactiveness in Wireless Sensor Networks with Mobile Carrier Robots

Falcon Martinez, Rafael Jesus 04 April 2012 (has links)
Wireless sensor networks (WSNs) increasingly permeate modern societies. In spite of their plethora of successful applications, however, WSNs are often unable to surmount many operational challenges that unexpectedly arise during their lifetime. Fortunately, robotic agents can now assist a WSN in various ways. This thesis illustrates how mobile robots able to carry a limited number of sensors can help the network react to sensor faults, either during or after its deployment in the monitoring region. Two scenarios are envisioned. In the first, carrier robots surround a point of interest with multiple sensor layers (focused coverage formation). We put forward the first known algorithm of its kind in the literature; it is energy-efficient, fault-reactive, and aware of the bounded robot cargo capacity. The second scenario is that of replacing damaged sensing units with spare, functional ones (coverage repair), which gives rise to the formulation of two novel combinatorial optimization problems. Three nature-inspired metaheuristic approaches that run at a centralized location are proposed; they are able to find good-quality solutions in a short time. Two frameworks for the identification of the damaged nodes are considered. The first leans upon diagnosable systems, i.e. existing distributed detection models in which individual units perform tests upon each other; two swarm intelligence algorithms are designed to quickly and reliably spot faulty sensors in this context. The second is an evolving risk management framework for WSNs formulated entirely in this thesis.
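A stripped-down version of the coverage-repair problem can be written as assigning carrier robots with limited cargo to damaged sites. The nearest-robot-first greedy below is only a baseline illustration of that formulation, not one of the thesis's metaheuristics, and the positions and capacity are made-up data.

```python
import math

def greedy_repair(robots, faults, capacity=3):
    """robots, faults: dicts of id -> (x, y) positions; capacity: spare
    sensors each carrier robot can hold. Returns (robot, fault) visits."""
    load = {r: 0 for r in robots}
    plan = []
    for f, fpos in faults.items():
        # Send the nearest robot that still has a spare on board.
        best = min((r for r in robots if load[r] < capacity),
                   key=lambda r: math.dist(robots[r], fpos), default=None)
        if best is None:
            break                          # all cargo exhausted
        plan.append((best, f))
        load[best] += 1
    return plan

print(greedy_repair({"r1": (0, 0), "r2": (10, 10)},
                    {"f1": (1, 1), "f2": (9, 9), "f3": (2, 0)}))
```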
