581

Lagring och visualisering av information om stötdämpare

Settlin, Johan, Ekelund, Joar January 2019 (has links)
Simulations that show how a shock absorber's settings affect its characteristics can lead to improved road holding, increased traffic safety, and faster lap times on the racetrack. Visualizing the simulated data gives users an understanding of how the shock absorber's settings will behave in practice. The goal of this work was to design a database that models a shock absorber's characteristics and to visualize these characteristics on a website. Requirements were gathered through interviews with experts, and further information was obtained through literature studies. Based on the collected requirements and case studies, a relational database containing information about a shock absorber's components and construction was developed, together with a visualization tool that presents the absorber's characteristics on a web page. The database and the visualization tool were then combined into a prototype for simulating a shock absorber's characteristics on the web. The case studies indicated that the database management system MySQL and the graph library Chart.js were best suited for the prototype, given the collected requirements. The functionality of the prototype was validated by the project's client, and the margin of error for the simulations was below 1%. This implies that the database model is of good quality and that the results are visualized in a correct and comprehensible manner.
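The entry above describes a relational database of damper components queried by a web visualization front end. As a rough illustration of that idea (not the thesis's actual MySQL schema; the table and column names here are invented), a miniature version can be sketched with Python's built-in sqlite3:

```python
import sqlite3

# Hypothetical, simplified schema; the thesis used MySQL and a richer model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE damper (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE component (
    id INTEGER PRIMARY KEY,
    damper_id INTEGER REFERENCES damper(id),
    kind TEXT,            -- e.g. 'shim', 'piston', 'spring'
    parameter REAL        -- a single characteristic value, for illustration
);
""")
conn.execute("INSERT INTO damper VALUES (1, 'front-left')")
conn.executemany(
    "INSERT INTO component (damper_id, kind, parameter) VALUES (?, ?, ?)",
    [(1, 'shim', 0.15), (1, 'piston', 44.0), (1, 'spring', 9.8)],
)

# Fetch the components of one damper, as a visualization front end might
# before handing the values to a charting library.
rows = conn.execute(
    "SELECT kind, parameter FROM component WHERE damper_id = ? ORDER BY kind",
    (1,),
).fetchall()
print(rows)  # [('piston', 44.0), ('shim', 0.15), ('spring', 9.8)]
```

A web endpoint would serialize such rows to JSON for a client-side graph library like the Chart.js mentioned in the abstract.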
582

Experiment Management for the Problem Solving Environment WBCSim

Shu, Jiang 31 August 2009 (has links)
A problem solving environment (PSE) is a computational system that provides a complete and convenient set of high level tools for solving problems from a specific domain. This thesis takes an in-depth look at the experiment management aspect of PSEs, which can be divided into three levels: 1) data management, 2) change management, and 3) execution management. At the data management level, anything related to an experiment (computer simulation) should be stored and documented. A database management system can be used to store the simulation runs for a PSE. Then various high level interfaces can be provided to allow users to save, retrieve, search, and compare these simulation runs. At the change management level, a scientist should only focus on how to solve a problem in the experiment domain. Aside from running experiments, a scientist may only consider how to define a new model, how to modify an existing model, and how to interpret an experiment result. By using XML to describe a simulation model and unify various implementation layers, changing an existing model in a PSE can be intuitive and fast. At the execution management level, how an experiment is executed is the main concern. By providing a computational steering capability, a scientist can pause, examine, and compare the intermediate results from a simulation. Contrasted with the traditional way of running a lengthy simulation to see the result at the end, computational steering can leverage the user's expert knowledge on the fly (during the simulation run) and provide new insights and new product design opportunities. This thesis illustrates these concepts and implementation by using WBCSim as an example. WBCSim is a PSE that increases the productivity of wood scientists conducting research on wood-based composite materials and manufacturing processes. It integrates Fortran 90 simulation codes with a Web based graphical front end, an optimization tool, and various visualization tools. 
The WBCSim project was begun in 1997 with support from the United States Department of Agriculture, the Department of Energy, and Virginia Tech. It has since been used by students in several wood science classes, by graduate students and faculty, and by researchers at several forest products companies. WBCSim also serves as a test bed for the design, construction, and evaluation of useful, production quality PSEs. / Ph. D.
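The change-management idea above, describing a simulation model declaratively in XML so that the other implementation layers can be generated from one definition, can be sketched as follows (the element and attribute names are hypothetical, not WBCSim's actual schema):

```python
import xml.etree.ElementTree as ET

# A hypothetical XML model description, used only to illustrate how a
# declarative definition can drive interface and execution layers.
MODEL_XML = """
<model name="composite-press">
  <parameter name="temperature" unit="C" default="180"/>
  <parameter name="pressure" unit="MPa" default="3.5"/>
  <output name="density" unit="kg/m3"/>
</model>
"""

root = ET.fromstring(MODEL_XML)
# Collect each parameter's default value; a PSE could build an input form
# or a simulation command line from this same dictionary.
params = {p.get("name"): float(p.get("default"))
          for p in root.findall("parameter")}
print(root.get("name"), params)
```

Adding a parameter to the XML then changes every layer that reads it, which is what makes modifying an existing model "intuitive and fast" in the sense the abstract describes.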
583

The formative evaluation and revision of an instructional management system for business computer competencies

Eason, Andrea Emmot 06 June 2008 (has links)
The purpose of this project was to (1) evaluate and revise a computer-based instructional management system developed to organize business computer competencies, and (2) develop and revise documentation for using the system. The instructional management system consists of a database and various applications employing relational database architecture. The resulting system will be used by Virginia business teachers in implementing their curricula. The prototype system was developed initially to organize a taxonomy of tasks identified to measure computer competencies. The computer competencies were extracted from the Business Education Suggested Course Competencies and Performance Objectives, published by the Virginia Department of Education in 1989. The taxonomy resulted in the publication of the Business Computer Software Curriculum Series in 1990. This latter publication forms the core of the instructional management system. The 1990 curriculum guide was ultimately expanded to include multiple choice and matching test questions organized to measure the tasks. / Ed. D.
584

Publication of the Bibliographies on the World Wide Web

Moral, Ibrahim Utku 28 January 1997 (has links)
Scientific research begins with a literature review that includes an extensive bibliographic search. Such searches are known to be difficult and time-consuming because of the vast amount of topical material existing in today's ever-changing technology base. Keeping up to date with related literature and being aware of the most recent publications require extensive time and effort. The need for a WWW-based software tool for collecting and providing access to this scientific body of knowledge is undeniable. The study explained herein addresses this problem through the development of an efficient, advanced, easy-to-use tool, WebBiblio, that provides a globally accessible WWW environment enabling the collection and dissemination of searchable bibliographies comprised of abstracts and keywords. This thesis describes the design, structure, and features of WebBiblio, and explains the ideas and approaches used in its development. The developed system is not a prototype, but a production system that exploits the capabilities of the WWW. Currently, it is used to publish three VV&T bibliographies at the WWW site: http://manta.cs.vt.edu/biblio. With its rich set of features and ergonomically engineered interface, WebBiblio brings a comprehensive solution to the problem of globally collecting and providing access to a diverse set of bibliographies. / Master of Science
585

Performance and reliability optimisation of a data acquisition and logging system in an integrated component-handling environment

Bothma, Bernardus Christian 02 1900 (has links)
Thesis (M. Tech.) - Central University of Technology, Free State, 2011
586

A distribution model for the assessment of database systems knowledge and skills among second-year university students

Meiring, Linda 01 1900 (has links)
Thesis (M. Tech.) - Central University of Technology, Free State, 2009
587

Routeplanner: a model for the visualization of warehouse data

Gouws, Patricia Mae 31 December 2008 (has links)
This study considers the development and use of a model of the visualization process to transform data in a warehouse into required insight. In the context of this study, `visualization process' refers to a step-wise methodology to develop enhanced insight by using visualization techniques. The model, named RoutePlanner, was developed by the researcher from a theoretical perspective and was then used and evaluated practically in the domain of insurance brokerage. The study highlights the proposed model, which comprises stages for the identification of the relevant data, selection of visualization methods, and evaluation of the visualizations, undergirded by a set of practical guidelines. To determine the effect of using RoutePlanner, an experiment was conducted, and the model's practical utility was assessed during an evaluation-of-use study. The goal of this study is to present the RoutePlanner model and the effect of its use. / Theoretical Computing / M.Sc. (Information Systems)
588

Optimized approach to decision fusion of heterogeneous data for breast cancer diagnosis.

Jesneck, JL, Nolte, LW, Baker, JA, Floyd, CE, Lo, JY 08 1900 (has links)
As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique to combine heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p < 0.02) and achieved AUC=0.85 +/- 0.01. The DF-P surpassed the other classifiers in terms of pAUC (p < 0.01) and reached pAUC=0.38 +/- 0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p < 0.04) and achieved AUC=0.94 +/- 0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC=0.57 +/- 0.07 to 0.67 +/- 0.05, p > 0.10), the DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets. / Dissertation
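The AUC metric that the decision-fusion variants above optimize can be computed directly as a normalized Mann-Whitney rank statistic. The sketch below illustrates the metric only, not the authors' decision-fusion algorithm:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count half).
    Equivalent to the normalized Mann-Whitney U statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
# One positive (0.4) is outranked by one negative (0.7): 8 of 9 pairs win.
print(roc_auc(labels, scores))  # 0.8888888888888888
```

The pAUC used in the study restricts this integral to a clinically relevant high-sensitivity (or low false-positive-rate) region of the ROC curve rather than averaging over all thresholds.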
589

Information Management in Local Area Networks: Impact on Users' Perceptions

Norton, Melanie J. 05 1900 (has links)
In this study, computer human interaction factors are examined as a possible source of information to aid in the operation and management of local area computer networks. Users' perceptions of computer performance and response time are evaluated in relation to specific modifications in the information organization of a file server in a local area network configuration running in Novell 3.11.
590

Efficient Processing of Range Queries in Main Memory

Sprenger, Stefan 11 March 2019 (has links)
Database systems employ index structures to accelerate search queries. Over the last years, the research community has proposed many in-memory approaches that optimize for CPU cache misses instead of disk I/O, as opposed to disk-based systems, and exploit the parallel capabilities of modern CPUs. However, these techniques mainly focus on single-key lookups and neglect the equally important range queries, a ubiquitous operator in data management used in numerous domains such as genomic analysis, sensor networks, and online analytical processing. The main goal of this dissertation is thus to improve the capabilities of main-memory database systems with regard to executing range queries. To this end, we first propose the cache-sensitive skip list, a cache-optimized, updateable main-memory index structure that targets the execution of range queries on single database columns. Second, we study the performance of multidimensional range queries on modern hardware, where data are stored in main memory and processors support SIMD instructions and multithreading. We re-evaluate a previous rule of thumb suggesting that, on disk-based systems, scans outperform index structures for selectivities of approximately 15-20% or more. To increase the practical relevance of our analysis, we also contribute a novel benchmark consisting of several realistic multidimensional range queries applied to real-world genomic data. Third, based on the outcomes of our experimental analysis, we devise the BB-Tree, a novel, fast, and space-efficient main-memory index structure that supports multidimensional range and point queries and provides a parallel search operator that leverages the multi-threading capabilities of modern CPUs.
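The single-column range query that the index structures above accelerate can be illustrated with a plain sorted array standing in for the index (this is not the cache-sensitive skip list itself, only the operation it supports): two binary searches locate the boundaries, and the answer is one contiguous slice.

```python
import bisect

def range_query(sorted_column, low, high):
    """Return all values v with low <= v <= high from a sorted column.
    bisect_left/bisect_right find the first and one-past-last matching
    positions in O(log n); the slice then reads the matches sequentially,
    which is cache-friendly because they are contiguous in memory."""
    lo = bisect.bisect_left(sorted_column, low)
    hi = bisect.bisect_right(sorted_column, high)
    return sorted_column[lo:hi]

column = sorted([42, 7, 19, 23, 5, 30, 19])  # [5, 7, 19, 19, 23, 30, 42]
print(range_query(column, 10, 30))  # [19, 19, 23, 30]
```

A static sorted array, unlike the dissertation's index structures, cannot be updated cheaply; supporting insertions and deletions while keeping range scans fast is precisely the problem the cache-sensitive skip list and the BB-Tree address.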
