631

Vector-Analytical Description of a Discrete Massive System

Schulze, Rainer W. 12 November 2012
Discrete massive systems are a hypothetical alternative to conventional technical information-processing systems. They are suited to reproducing the internal physics of a dynamic process directly, "in vitro" as it were. Accordingly, the two kinds of system have disjoint domains of application. In a discrete massive system, traffic streams move from processor to processor, driven by references between the processors and directed by the topology of the interconnection network that links them. The system's processing power rests on the superposition and displacement of these traffic streams. Discrete massive systems can be described vector-analytically as a diffusion process by means of a Fokker-Planck equation. Such an equation is set up for an n-dimensional spatial continuum, and its parameters, the diffusion coefficient and the mobility vector, are carried over into an m-dimensional orthonormal discrete space: the action space of a discrete massive system. The traffic streams of a discrete massive system are modeled by corpuscle streams along flow lines in this discrete action space. The describing parameters of the corpuscle streams, diffusion coefficient and mobility vector, can be determined by time measurements.
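The abstract refers to a Fokker-Planck equation over an n-dimensional continuum with a diffusion coefficient and a mobility vector. As general background (this is the standard drift-diffusion form, not necessarily the thesis's exact formulation), such an equation for a density $p(\mathbf{x},t)$ with mobility vector $\boldsymbol{\mu}(\mathbf{x})$ and constant diffusion coefficient $D$ reads:

$$\frac{\partial p(\mathbf{x},t)}{\partial t} = -\nabla \cdot \big(\boldsymbol{\mu}(\mathbf{x})\,p(\mathbf{x},t)\big) + \nabla \cdot \big(D\,\nabla p(\mathbf{x},t)\big)$$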
632

Effects of Shape, Letter Arrangements, and Practice on Text Entry on a Virtual Keyboard

O'Brien, Marita A. 22 May 2006
This study examined the design of a virtual keyboard that can be used for text entry with a rotary controller, particularly when users differ in age and experience with a particular system. I specifically examined keyboard shape and letter arrangement to help determine the best features to use in a design. Two keyboard shapes, an Oval and a Plus, were selected to represent different aspects of shape. Two letter arrangements, Alphabetic and a standard QWERTY-based ordering, were selected to represent a well-known and a less familiar arrangement. In the experiment, older and younger adults entered words over two consecutive days. Most of the time they used either the Oval or the Plus, but they also used the alternate shape at specific points during their practice session to allow assessment of their ability to transfer what they had learned. At the end of the second day, they also used a variation of the practiced arrangement to examine how well they had learned the letter arrangement. Text entry performance on both shapes improved with practice, demonstrating that participants could learn even unfamiliar devices and virtual keyboards to complete a word entry task. No overall shape effect was found at any level of performance, but shape did affect how participants learned and performed the word entry task. In particular, unique visual features on a shape may facilitate memorization of letter/visual-cue mappings. These shape features are particularly important for older adults, as younger adults seem to develop a mental model that helps them memorize letter locations on either shape. With practice, older adults could achieve optimal performance levels with an Alphabetic keyboard on the Plus shape, which has the more visually distinctive corners. In general, alphabetic ordering was best not only because it aided visual search, but also because it facilitated better movement planning. Overall, designers should give a virtual keyboard distinctive visual features that fit the movements afforded by the selected input device.
633

Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions

Alemany, Kristina 13 November 2009
Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run times and significant user manipulation, mission design has relied on expert knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses, or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables (launch date, time of flight, and asteroid stay times, when applicable) and were characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is conceptual design, where broad exploration of the design space is critical, with the goal of rapidly identifying a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second step applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem; see the sketch after this abstract. The scheme combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. It is then validated on a larger intermediate sample problem, which also has a known solution. Next, it is applied to several larger combinatorial asteroid rendezvous problems, including the 2nd and 3rd Global Trajectory Optimization Competition problems, using previously identified good solutions as validation benchmarks. The methodology reduces the number of asteroid sequences requiring low-thrust optimization by six to seven orders of magnitude relative to the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions not previously reported by any of the competitors. Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.
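As a rough illustration of the two-phase idea (cheap heuristic pruning followed by a global search over the surviving sequences), here is a minimal Python sketch. The pruning rule, the cost proxy, and all names are illustrative assumptions, not the thesis's actual algorithm or tools:

```python
import random

def estimate_cost(seq):
    # Stand-in heuristic: a cheap proxy for the delta-v of visiting the
    # asteroids in seq (here just a distance between asteroid ids).
    return sum(abs(a - b) for a, b in zip(seq, seq[1:]))

def prune(sequences, budget):
    # Phase 1: discard sequences whose cheap estimate exceeds a budget, so
    # only a small fraction ever reaches expensive low-thrust optimization.
    return [s for s in sequences if estimate_cost(s) <= budget]

def genetic_search(pool, generations=50, pop_size=20, mutation_rate=0.3):
    # Phase 2: evolve a population of candidate sequences toward low cost.
    population = random.sample(pool, min(pop_size, len(pool)))
    for _ in range(generations):
        population.sort(key=estimate_cost)
        survivors = population[: max(1, len(population) // 2)]
        children = [random.choice(pool) if random.random() < mutation_rate
                    else random.choice(survivors) for _ in survivors]
        population = survivors + children
    return min(population, key=estimate_cost)

sequences = [(1, 5, 9), (2, 3, 4), (1, 2, 8), (7, 1, 6)]
pool = prune(sequences, budget=10)
print(genetic_search(pool, generations=10, pop_size=4))
```

In the thesis, the second phase additionally wraps a branch-and-bound layer around the search and scores each surviving sequence with a true low-thrust trajectory optimizer rather than a proxy.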
634

Toward accurate and efficient outlier detection in high dimensional and large data sets

Nguyen, Minh Quoc 22 April 2010
An efficient method to compute local density-based outliers in high-dimensional data is proposed. We show that this type of outlier is present in any subset of the dataset. This property is used to partition the dataset into random subsets, compute the outliers locally within each subset, and then combine the outliers from the different subsets, so local density-based outliers can be computed efficiently. Another challenge in high-dimensional outlier detection is that outliers are often suppressed when the majority of dimensions do not exhibit outlying behavior. The contribution of this work is a filtering method whereby outlier scores are computed in sub-dimensions: low sub-dimensional scores are filtered out, and the high scores are aggregated into the final score (a sketch follows this abstract). This aggregation with filtering eliminates the effect of small deviations accumulating across many dimensions, so outliers are identified correctly. In some cases, sets of outliers that form micro-patterns are more interesting than individual outliers. These micro-patterns are anomalous with respect to the dominant patterns in the dataset, and detecting them poses two challenges. First, anomalous patterns are often masked by the dominant patterns when existing clustering techniques, such as clustering with the k-nearest-neighbor algorithm, are used. The contribution of this work is an adaptive nearest neighbor and the concept of a dual neighbor, which detect micro-patterns more accurately. Second, the anomalous patterns must be computed very fast. Our contribution is to compute the patterns based on the correlation between attributes: correlation implies that the data can be partitioned into groups along each attribute to learn candidate patterns within the groups. A feature-based method is thus developed that computes these patterns efficiently.
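A minimal sketch of the sub-dimensional filtering idea, under the simplifying assumption that a per-dimension z-score stands in for the per-dimension outlier score (the thesis's actual scoring is density-based, and the threshold here is an arbitrary illustrative choice):

```python
import numpy as np

def dimension_scores(X, x):
    # Score each dimension of point x as a z-score against column statistics.
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9
    return np.abs((x - mu) / sigma)

def filtered_score(X, x, threshold=2.0):
    # Keep only strong sub-dimensional scores, then aggregate, so small
    # deviations in many dimensions cannot accumulate into a false outlier.
    s = dimension_scores(X, x)
    return s[s >= threshold].sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
x = X[0].copy()
x[3] = 8.0                       # one strongly deviating dimension
print(filtered_score(X, x))      # large: driven by dimension 3 alone
print(filtered_score(X, X[1]))   # much smaller: weak deviations filtered out
```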
635

Multi-dimensional optimization for cloud based multi-tier applications

Jung, Gueyoung 09 November 2010
Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By hosting many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these applications at a very fine granularity. Meanwhile, resource virtualization has gained considerable attention in the design of computer systems and has become a key ingredient of cloud computing. It significantly improves aggregate power efficiency and resource utilization by enabling resource consolidation, and it allows infrastructure providers to manage their resources in an agile way under highly dynamic conditions. However, these trends also raise significant challenges for researchers and practitioners seeking agile resource management in consolidated environments. First, they must deal with the very different responsiveness of different applications while handling dynamic changes in resource demands as application workloads change over time. Second, when provisioning resources, they must consider management costs such as power consumption and adaptation overheads (i.e., overheads incurred by dynamically reconfiguring resources). Dynamic provisioning of virtual resources entails an inherent performance-power tradeoff, and indiscriminate adaptations can impose significant overheads on power consumption and end-to-end performance. Hence, to achieve agile resource management, it is important to thoroughly investigate the performance characteristics of deployed applications, precisely account for the costs of adaptations, and then balance benefits against costs. Fundamentally, the research question is how to dynamically provision available resources for all deployed applications to maximize overall utility under time-varying workloads, while accounting for such management costs. Given the scope of the problem space, this dissertation aims to develop an optimization system that not only meets the performance requirements of deployed applications but also addresses the tradeoffs between performance, power consumption, and adaptation overheads; a sketch of this tradeoff follows the abstract. To this end, this dissertation makes two distinct contributions. First, I show that adaptations applied to cloud infrastructures can cause significant overheads not only in end-to-end response time but also in server power consumption, and that such costs vary in intensity and time scale with workload, adaptation type, and the performance characteristics of the hosted applications. Second, I address multi-dimensional optimization among server power consumption, performance benefit, and the transient costs incurred by various adaptations, and I incorporate the overhead of the optimization procedure itself into the problem formulation. System optimization approaches typically entail intensive computation and potentially long delays when facing the huge search space of cloud computing infrastructures, so this cost cannot be ignored when adaptation plans are designed. In this work, a scalable optimization algorithm and a hierarchical adaptation architecture are developed to handle many applications, hosting servers, and adaptations, and to support adaptation decisions at multiple time scales.
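As a toy illustration of the multi-dimensional tradeoff described above, the following sketch picks the configuration that maximizes performance benefit minus power and adaptation (reconfiguration) costs. The candidate space, the cost models, and all numbers are illustrative assumptions, not the dissertation's formulation:

```python
from itertools import product

def utility(config, current, benefit, power_cost, adapt_cost):
    # Net utility: performance benefit less power and reconfiguration costs.
    return benefit(config) - power_cost(config) - adapt_cost(current, config)

def best_config(current, cpus_range, replicas_range, **costs):
    # Exhaustively score each (cpus, replicas) candidate; a real system
    # would need a scalable search over a far larger space.
    candidates = product(cpus_range, replicas_range)
    return max(candidates, key=lambda c: utility(c, current, **costs))

choice = best_config(
    current=(2, 1),                                       # (cpus, replicas)
    cpus_range=range(1, 5),
    replicas_range=range(1, 4),
    benefit=lambda c: 10 * min(c[0] * c[1], 6),           # saturating throughput
    power_cost=lambda c: 3 * c[0] * c[1],                 # power grows with scale
    adapt_cost=lambda cur, c: 2 * (abs(cur[0] - c[0]) + abs(cur[1] - c[1])),
)
print(choice)   # configuration with the best benefit/power/adaptation balance
```

The adaptation-cost term is what keeps the optimizer from reconfiguring indiscriminately: a slightly better configuration is rejected when the transient cost of reaching it outweighs the gain.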
636

Verifiable and redactable medical documents

Brown, Jordan Lee 16 July 2012
The objective of the proposed research is to answer the question of how to provide verification and redactability for medical documents at a manageable computational cost to all parties involved. The approach examines the use of Merkle hash trees to provide the required redaction and verification characteristics. Using the Merkle hash tree, the elements of various Continuity of Care Documents are extracted and bound into the signature scheme (a minimal sketch follows this abstract). An analysis of the approach and of the characteristics that make it a likely candidate for success is provided within. A description of a framework implementation and a sample application demonstrate potential uses of the system. Finally, results from various experiments with the framework provide concrete evidence of a solution to the question that was the focus of this research.
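A minimal sketch of how a Merkle hash tree supports redaction, assuming document elements are hashed into leaves and only the root is signed; the helper names and document fields are illustrative, not the thesis's framework:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Pairwise-hash leaf hashes upward until a single root remains.
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:             # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

elements = [b"name: Jane Doe", b"dob: 1970-01-01", b"dx: hypertension"]
leaves = [h(e) for e in elements]
signed_root = merkle_root(leaves)      # in practice, sign this root once

# Redaction: release only the leaf hash for element 0, full contents for
# the rest. The root, and hence the original signature, still verifies.
disclosed = [leaves[0], h(elements[1]), h(elements[2])]
assert merkle_root(disclosed) == signed_root
```

This is the property that makes the structure attractive for medical records: a patient can strip a sensitive field without invalidating the provider's signature over the document.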
637

Prediction based load balancing heuristic for a heterogeneous cluster

Saranyan, N. 09 1900
Load balancing has been a topic of interest in both academia and industry, mainly because of the scope for performance enhancement available in many parallel and distributed processing environments. Among the many approaches used to solve the load balancing problem, very few use prediction of code execution times; we attribute this to the field of code prediction being in its infancy. As of this writing, we are not aware of any load balancing approach that uses prediction of code execution times while relying neither on information provided by the user nor on an off-line prediction step whose results are then used at run time. In this context, it is important to note that prior studies have indicated the feasibility of predicting the CPU requirements of general application programs. Our motivation in using prediction-based load balancing is to determine the feasibility of the approach: if it yields good performance, then it may be worthwhile to develop a predictor that can give a rough estimate of the length of the next CPU burst of each process. While high accuracy of the predictor is not essential, its computational overhead must be sufficiently small so as not to offset the gain of load balancing. As for the system, we assume a set of autonomous computers connected by a fast, shared medium; the individual nodes can vary in the additional hardware and software available on them. Further, we assume that the processes in the workload are sequential. The first step is to fix the parameters of our assumed predictor. Then an algorithm that takes the characteristics of the predictor into account is proposed; a sketch follows this abstract. The design of the algorithm involves many trade-off decisions, including certain steps in which we relied on trial and error to find suitable values. The next step is to verify the efficiency of the algorithm: we assess its performance through event-driven simulation and evaluate its robustness with respect to the characteristics of the predictor. The contribution of the thesis is as follows. It proposes a load-balancing algorithm for a heterogeneous cluster of workstations connected by a fast network. The simulation assumes that the heterogeneity is limited to variability in processor clock rates, but the algorithm can be applied when the nodes exhibit other types of heterogeneity as well. The algorithm uses prediction of CPU burst lengths as its basic input. Its performance is evaluated through event-driven simulation using assumed workload distributions, and the results show that the algorithm yields a good improvement in response times over the scenario in which no load redistribution is done.
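A minimal sketch of prediction-based placement in the spirit of the thesis: estimate each process's next CPU burst with an exponential average (a standard burst-estimation technique, assumed here rather than taken from the thesis) and place it on the node with the smallest predicted completion time, scaled by that node's relative clock rate. All values are illustrative:

```python
ALPHA = 0.5   # weight given to the most recently observed burst

def predict(prev_prediction, last_burst):
    # Exponential averaging of CPU burst lengths.
    return ALPHA * last_burst + (1 - ALPHA) * prev_prediction

def place(burst_estimate, nodes):
    # nodes maps name -> (pending_work, relative_clock_rate). Pick the node
    # that would finish the predicted burst soonest given its speed.
    def completion(name):
        pending, rate = nodes[name]
        return (pending + burst_estimate) / rate
    return min(nodes, key=completion)

est = predict(prev_prediction=8.0, last_burst=4.0)           # -> 6.0
print(place(est, {"slow": (2.0, 1.0), "fast": (6.0, 2.5)}))  # -> "fast"
```

Note how heterogeneity enters only through the clock-rate divisor, matching the simulation's assumption that nodes differ solely in processor speed.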
638

Effects of retinal disparity depth cues on cognitive workload in 3-D displays

Gooding, Linda Wells, January 1991
Thesis (Ph.D.)--Virginia Polytechnic Institute and State University, 1991. Vita. Abstract. Includes bibliographical references (leaves 174-179). Also available via the Internet.
639

Global synchronization of asynchronous computing systems

Barnes, Richard Neil. January 2001
Thesis (M.S.)--Mississippi State University, Department of Electrical and Computer Engineering. Title from title screen. Includes bibliographical references.
640

Remote data backup system for disaster recovery

Lin, Hua. January 2004
Thesis (M.S.)--University of Hawaii at Manoa, 2004. Includes bibliographical references (leaves 64-66). Also available via the World Wide Web.
