201 |
Integrované řešení diagnostiky výrobního zařízení v energetice České republiky / INTEGRATED SOLUTION OF DIAGNOSTICS OF PRODUCTION EQUIPMENT IN THE CZECH ENERGY INDUSTRY. Cvešpr, Pavel. January 2013.
This dissertation is concerned with the diagnostics of the main production facilities in Czech power engineering, focusing on its integrating role in acquiring, storing and processing the information needed to evaluate the technical state of operated equipment and to plan its lifetime management. It is divided into a theoretical and a practical part. The theoretical part presents conclusions drawn from an examination of the needs of the workers involved in diagnostics, maintenance and expert assessment of the technical state of equipment. These conclusions were formulated on the basis of an analysis of the current status ("as-is" analysis) of how diagnostics is performed on systems operated in the technological units of both nuclear and conventional power plants. The monitored equipment covers electrical installations and machinery, steel and building structures, measuring instruments and vibrodiagnostics. Based on the analysis results, process diagrams are created for the solution of partial tasks. The analysis of the proposed solution ("should-be" analysis) includes a design of the fundamental scheme of the data model for a software solution and a design of the data flows from the individual data sources. The following part presents the application layer, including a detailed description of its major functionalities, and describes the important activities and procedures necessary to evaluate the technical state of equipment. The practical part deals with the implementation of the LTO suite software product in the power engineering environment of the Czech Republic, specifically within ČEZ, a.s. The functionality of the LTO suite is demonstrated by screenshots of its individual modules operating on real data: starting with the initial screen for configuring the displayed data, continuing with examples of the equipment register and of the planning, processing and saving of information collected through diagnostic activities, and ending with the integration module (the analytical layer), which evaluates the technical state of equipment at a given date and produces reporting output. The thesis also includes the chapters "Aims of the Study" and "Conclusion". The key chapter, "Benefits of the Study", summarizes the original results of the research, including those applicable outside the power engineering area.
202 |
A PROBABILISTIC MACHINE LEARNING FRAMEWORK FOR CLOUD RESOURCE SELECTION ON THE CLOUD. Khan, Syeduzzaman. 01 January 2020.
The execution of scientific applications on the Cloud comes with great flexibility, scalability, cost-effectiveness, and substantial computing power. Market-leading Cloud service providers such as Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP) offer a variety of general-purpose, memory-intensive, and compute-intensive Cloud instances for the execution of scientific applications. The scientific community, especially small research institutions and undergraduate universities, faces many hurdles in conducting high-performance computing research in the absence of large dedicated clusters. The Cloud provides a lucrative alternative to dedicated clusters; however, the wide range of Cloud computing choices makes instance selection difficult for end-users. This thesis aims to simplify Cloud instance selection by proposing a probabilistic machine learning framework that helps users select a suitable Cloud instance for their scientific applications.
This research builds on the previously proposed A2Cloud-RF framework that recommends high-performing Cloud instances by profiling the application and the selected Cloud instances. The framework produces a set of objective scores called the A2Cloud scores, which denote the compatibility level between the application and the selected Cloud instances. When used alone, the A2Cloud scores become increasingly unwieldy with an increasing number of tested Cloud instances. Additionally, the framework only examines the raw application performance and does not consider the execution cost to guide resource selection. To improve the usability of the framework and assist with economical instance selection, this research adds two Naïve Bayes (NB) classifiers that consider both the application’s performance and execution cost. These NB classifiers include: 1) NB with a Random Forest Classifier (RFC) and 2) a standalone NB module.
Naïve Bayes with a Random Forest Classifier (RFC) augments the A2Cloud-RF framework's final instance ratings with the execution cost metric. In the training phase, the classifier builds the frequency and probability tables. The classifier recommends a Cloud instance based on the highest posterior probability for the selected application.
The standalone NB classifier uses the generated A2Cloud score (an intermediate result from the A2Cloud-RF framework) and the execution cost metric to build its model. It forms a frequency table and probability (prior and likelihood) tables; to recommend a Cloud instance for a test application, it calculates the posterior probability for each Cloud instance and recommends the instance with the highest posterior probability. This study executes eight real-world applications on 20 Cloud instances from AWS, Azure, GCP, and Linode. We train the NB classifiers using 80% of this dataset and employ the remaining 20% for testing. The testing yields more than 90% recommendation accuracy for the chosen applications and Cloud instances. Because of the imbalanced nature of the dataset and the multi-class nature of the classification, we also consider the confusion matrix (true positives, false positives, true negatives, and false negatives) and F1 scores, which exceed 0.9, to describe the model performance. The final goal of this research is to make Cloud computing an accessible resource for conducting high-performance scientific executions by enabling users to select an effective Cloud instance from across multiple providers.
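To make the recommendation step concrete, here is a minimal sketch of a Naïve Bayes instance recommender; it is not the A2Cloud-RF implementation, it uses scikit-learn's Gaussian NB in place of the thesis's frequency and probability tables, and the feature values, labels, and 80/20 split below are invented for illustration.

```python
# Hedged sketch of a Naive Bayes instance recommender; feature values, labels,
# and the split only loosely mirror the study's description.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix, f1_score

# Assumed features per (application, instance) pair: A2Cloud score and execution cost.
X = np.array([
    [8.2, 0.45], [6.1, 0.30], [3.4, 0.12], [9.0, 0.80],
    [2.5, 0.10], [7.3, 0.55], [4.8, 0.20], [8.8, 0.70],
])
# Labels: index of the recommended Cloud instance class (multi-class).
y = np.array([2, 1, 0, 2, 0, 2, 1, 2])

# 80/20 train/test split, as in the study.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = GaussianNB()            # stands in for the frequency/probability tables
clf.fit(X_train, y_train)     # training phase: estimate priors and likelihoods

pred = clf.predict(X_test)    # recommendation = class with the highest posterior probability
print(confusion_matrix(y_test, pred))
print(f1_score(y_test, pred, average="weighted"))
```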
203 |
A comparative study of the Data Warehouse and Data Lakehouse architecture / En komparativ studie av Data Warehouse- och Data Lakehouse-arkitektur. Salqvist, Philip. January 2024.
This thesis aimed to assess a given Data Warehouse against a well-suited Data Lakehouse in terms of read performance and scalability. Using the TPC-DS benchmark, these systems were tested with synthetic datasets reflecting the specific needs of a Decision Support (DSS) system. Moreover, this research aimed to determine whether certain categories of queries resulted in notably large discrepancies between the systems. This might help pinpoint the architectural differences that cause these discrepancies. Initial research identified BigQuery and Delta Lake as top candidates due to their exceptional read performance and scalability, prompting further investigation into both. The most significant latency difference was noted in the initial benchmark using a dataset scale of 2 GB, with BigQuery outperforming Delta Lake. As the dataset size grew, BigQuery’s latency increased by 336%, while Delta Lake’s went up by just 40%. However, BigQuery still maintained a significant overall lower latency across all scales. Detailed query analysis showed BigQuery excelling especially with complex queries, those involving extensive aggregation and multiple join operations, which have a high potential for generating large intermediate data during the shuffle stage. It was hypothesized that some of the read performance discrepancies could be attributed to BigQuery’s in-memory shuffling capability, whereas Delta Lake might spill intermediate data to the disk. Delta Lake’s hardware utilization metrics further supported this theory, displaying a trend where peaks in memory usage and disk write rate coincided with queries showing high discrepancies. Meanwhile, CPU utilization remained low. This pattern suggests an I/O-bound system rather than a CPU-bound one, possibly explaining the observed performance differences. Future studies are encouraged to explicitly monitor shuffle operations, aiming for a more rigorous correlation between high-discrepancy queries and data spillage during the shuffle phase. Further research should also include larger dataset sizes; this thesis was constrained to a maximum dataset size of 64 GB due to limited resources. / Denna uppsats undersökte ett givet Data Warehouse i jämförelse med ett lämpligt Data Lakehouse med fokus på läsprestanda och skalbarhet. Med hjälp av TPC-DS benchmark testades dessa system med syntetiska dataset som speglade kundens specifika behov. Vidare syftade forskningen till att avgöra om vissa kategorier av queries resulterade i märkbart stora skillnader mellan systemen. Detta för att identifiera de teknologiska aspekter hos systemen som orsakar dessa skillnader. Den inledande litteraturstudien identifierade BigQuery och Delta Lake som toppkandidater på grund av deras läsprestanda och skalbarhet, vilket ledde till ytterligare undersökning av båda. Den mest påtagliga skillnaden i latens noterades i den initiala jämförelsen med ett dataset av storleken 2 GB, där BigQuery presterade bättre än Delta Lake. När datamängden skalades upp, ökade BigQuery’s latens med 336%, medan Delta Lakes ökade med endast 40%. Dock bibehöll BigQuery en avsevärt lägre total latens för samtliga datamängder. Detaljerad analys visade att BigQuery presterade särskilt bra under komplexa queries som involverade omfattande aggregering och flera join-operationer, vilka har en hög potential för att generera stora datamängder under shuffle-fasen. Det antogs att skillnaderna i latens delvis kunde tillskrivas BigQuery’s in-memory shuffle-kapacitet, medan Delta Lake riskerade att spilla data till disk. 
Delta Lakes hårdvaruanvändning stödde denna teori ytterligare, där toppar i minnesanvändning och skrivhastighet till disk sammanföll med queries som visade höga skillnader, samtidigt som CPU-användningen förblev låg. Detta mönster tyder på ett I/O-bundet system snarare än ett CPU-bundet, vilket möjligen förklarar de observerade prestandaskillnaderna. Framtida studier uppmuntras att explicit övervaka shuffle-operationer, med målet att mer noggrant koppla queries som uppvisar stora skillnader med dataspill under shuffle-fasen. Ytterligare forskning bör också inkludera större datamängdstorlekar; denna avhandling var begränsad till en maximal datamängdstorlek på 64 GB på grund av begränsade resurser.
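As a rough, hedged illustration of the kind of per-query latency measurement such a benchmark rests on (not the harness actually used in the thesis), the sketch below times one TPC-DS-style aggregation query on BigQuery and on a Delta Lake table queried through Spark; credentials, the Spark/Delta session configuration, and the table and query names are all assumptions.

```python
# Hedged sketch: per-query wall-clock latency on BigQuery vs. Delta Lake (via Spark).
# The table name, query text, and session setup are illustrative assumptions.
import time
from google.cloud import bigquery
from pyspark.sql import SparkSession

QUERY = "SELECT ss_store_sk, SUM(ss_net_paid) FROM store_sales GROUP BY ss_store_sk"

def time_bigquery(sql: str) -> float:
    client = bigquery.Client()          # assumes application-default credentials
    start = time.perf_counter()
    list(client.query(sql).result())    # force full result materialisation
    return time.perf_counter() - start

def time_delta(spark: SparkSession, sql: str) -> float:
    start = time.perf_counter()
    spark.sql(sql).collect()            # assumes store_sales is a registered Delta table
    return time.perf_counter() - start

if __name__ == "__main__":
    spark = SparkSession.builder.appName("tpcds-latency").getOrCreate()
    print("BigQuery latency (s):", time_bigquery(QUERY))
    print("Delta Lake latency (s):", time_delta(spark, QUERY))
```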
204 |
Distributed Computing Solutions for High Energy Physics Interactive Data Analysis. Padulano, Vincenzo Eduardo. 04 May 2023.
[ES] La investigación científica en Física de Altas Energías (HEP) se caracteriza por desafíos computacionales complejos, que durante décadas tuvieron que ser abordados mediante la investigación de técnicas informáticas en paralelo a los avances en la comprensión de la física. Uno de los principales actores en el campo, el CERN, alberga tanto el Gran Colisionador de Hadrones (LHC) como miles de investigadores cada año que se dedican a recopilar y procesar las enormes cantidades de datos generados por el acelerador de partículas. Históricamente, esto ha proporcionado un terreno fértil para las técnicas de computación distribuida, conduciendo a la creación de Worldwide LHC Computing Grid (WLCG), una red global de gran potencia informática para todos los experimentos LHC y del campo HEP. Los datos generados por el LHC hasta ahora ya han planteado desafíos para la informática y el almacenamiento. Esto solo aumentará con futuras actualizaciones de hardware del acelerador, un escenario que requerirá grandes cantidades de recursos coordinados para ejecutar los análisis HEP. La estrategia principal para cálculos tan complejos es, hasta el día de hoy, enviar solicitudes a sistemas de colas por lotes conectados a la red. Esto tiene dos grandes desventajas para el usuario: falta de interactividad y tiempos de espera desconocidos. En años más recientes, otros campos de la investigación y la industria han desarrollado nuevas técnicas para abordar la tarea de analizar las cantidades cada vez mayores de datos generados por humanos (una tendencia comúnmente mencionada como "Big Data"). Por lo tanto, han surgido nuevas interfaces y modelos de programación que muestran la interactividad como una característica clave y permiten el uso de grandes recursos informáticos.
A la luz del escenario descrito anteriormente, esta tesis tiene como objetivo aprovechar las herramientas y arquitecturas de la industria de vanguardia para acelerar los flujos de trabajo de análisis en HEP, y proporcionar una interfaz de programación que permite la paralelización automática, tanto en una sola máquina como en un conjunto de recursos distribuidos. Se centra en los modelos de programación modernos y en cómo hacer el mejor uso de los recursos de hardware disponibles al tiempo que proporciona una experiencia de usuario perfecta. La tesis también propone una solución informática distribuida moderna para el análisis de datos HEP, haciendo uso del software llamado ROOT y, en particular, de su capa de análisis de datos llamada RDataFrame. Se exploran algunas áreas clave de investigación en torno a esta propuesta. Desde el punto de vista del usuario, esto se detalla en forma de una nueva interfaz que puede ejecutarse en una computadora portátil o en miles de nodos informáticos, sin cambios en la aplicación del usuario. Este desarrollo abre la puerta a la explotación de recursos distribuidos a través de motores de ejecución estándar de la industria que pueden escalar a múltiples nodos en clústeres HPC o HTC, o incluso en ofertas serverless de nubes comerciales. Dado que el análisis de datos en este campo a menudo está limitado por E/S, se necesita comprender cuáles son los posibles mecanismos de almacenamiento en caché. En este sentido, se investigó un sistema de almacenamiento novedoso basado en la tecnología de almacenamiento de objetos como objetivo para el caché.
En conclusión, el futuro del análisis de datos en HEP presenta desafíos desde varias perspectivas, desde la explotación de recursos informáticos y de almacenamiento distribuidos hasta el diseño de interfaces de usuario ergonómicas. Los marcos de software deben apuntar a la eficiencia y la facilidad de uso, desvinculando la definición de los cálculos físicos de los detalles de implementación de su ejecución. Esta tesis se enmarca en el esfuerzo colectivo de la comunidad HEP hacia estos objetivos, definiendo problemas y posibles soluciones que pueden ser adoptadas por futuros investigadores. / [CA] La investigació científica a Física d'Altes Energies (HEP) es caracteritza per desafiaments computacionals complexos, que durant dècades van haver de ser abordats mitjançant la investigació de tècniques informàtiques en paral·lel als avenços en la comprensió de la física. Un dels principals actors al camp, el CERN, acull tant el Gran Col·lisionador d'Hadrons (LHC) com milers d'investigadors cada any que es dediquen a recopilar i processar les enormes quantitats de dades generades per l'accelerador de partícules. Històricament, això ha proporcionat un terreny fèrtil per a les tècniques de computació distribuïda, conduint a la creació del Worldwide LHC Computing Grid (WLCG), una xarxa global de gran potència informàtica per a tots els experiments LHC i del camp HEP. Les dades generades per l'LHC fins ara ja han plantejat desafiaments per a la informàtica i l'emmagatzematge. Això només augmentarà amb futures actualitzacions de maquinari de l'accelerador, un escenari que requerirà grans quantitats de recursos coordinats per executar les anàlisis HEP. L'estratègia principal per a càlculs tan complexos és, fins avui, enviar sol·licituds a sistemes de cues per lots connectats a la xarxa. Això té dos grans desavantatges per a l'usuari: manca d'interactivitat i temps de espera desconeguts. En anys més recents, altres camps de la recerca i la indústria han desenvolupat noves tècniques per abordar la tasca d'analitzar les quantitats cada vegada més grans de dades generades per humans (una tendència comunament esmentada com a "Big Data"). Per tant, han sorgit noves interfícies i models de programació que mostren la interactivitat com a característica clau i permeten l'ús de grans recursos informàtics. A la llum de l'escenari descrit anteriorment, aquesta tesi té com a objectiu aprofitar les eines i les arquitectures de la indústria d'avantguarda per accelerar els fluxos de treball d'anàlisi a HEP, i proporcionar una interfície de programació que permet la paral·lelització automàtica, tant en una sola màquina com en un conjunt de recursos distribuïts. Se centra en els models de programació moderns i com fer el millor ús dels recursos de maquinari disponibles alhora que proporciona una experiència d'usuari perfecta. La tesi també proposa una solució informàtica distribuïda moderna per a l'anàlisi de dades HEP, fent ús del programari anomenat ROOT i, en particular, de la seva capa d'anàlisi de dades anomenada RDataFrame. S'exploren algunes àrees clau de recerca sobre aquesta proposta. Des del punt de vista de l'usuari, això es detalla en forma duna nova interfície que es pot executar en un ordinador portàtil o en milers de nodes informàtics, sense canvis en l'aplicació de l'usuari. 
Aquest desenvolupament obre la porta a l'explotació de recursos distribuïts a través de motors d'execució estàndard de la indústria que poden escalar a múltiples nodes en clústers HPC o HTC, o fins i tot en ofertes serverless de núvols comercials. Atès que sovint l'anàlisi de dades en aquest camp està limitada per E/S, cal comprendre quins són els possibles mecanismes d'emmagatzematge en memòria cau. En aquest sentit, es va investigar un nou sistema d'emmagatzematge basat en la tecnologia d'emmagatzematge d'objectes com a objectiu per a la memòria cau. En conclusió, el futur de l'anàlisi de dades a HEP presenta reptes des de diverses perspectives, des de l'explotació de recursos informàtics i d'emmagatzematge distribuïts fins al disseny d'interfícies d'usuari ergonòmiques. Els marcs de programari han d'apuntar a l'eficiència i la facilitat d'ús, desvinculant la definició dels càlculs físics dels detalls d'implementació de la seva execució. Aquesta tesi s'emmarca en l'esforç col·lectiu de la comunitat HEP cap a aquests objectius, definint problemes i possibles solucions que poden ser adoptades per futurs investigadors. / [EN] Scientific research in High Energy Physics (HEP) is characterised by complex computational challenges, which over the decades had to be addressed by researching computing techniques in parallel to the advances in understanding physics. One of the main actors in the field, CERN, hosts both the Large Hadron Collider (LHC) and thousands of researchers yearly who are devoted to collecting and processing the huge amounts of data generated by the particle accelerator. This has historically provided a fertile ground for distributed computing techniques, which led to the creation of the Worldwide LHC Computing Grid (WLCG), a global network providing large computing power for all the experiments revolving around the LHC and the HEP field. Data generated by the LHC so far has already posed challenges for computing and storage. This is only going to increase with future hardware updates of the accelerator, which will bring a scenario that will require large amounts of coordinated resources to run the workflows of HEP analyses. The main strategy for such complex computations is, still to this day, submitting applications to batch queueing systems connected to the grid and waiting for the final result to arrive. This has two great disadvantages from the user's perspective: no interactivity and unknown waiting times. In more recent years, other fields of research and industry have developed new techniques to address the task of analysing the ever-increasing amounts of human-generated data (a trend commonly referred to as "Big Data"). Thus, new programming interfaces and models have arisen that most often showcase interactivity as one key feature while also allowing the use of large computational resources.
In light of the scenario described above, this thesis aims at leveraging cutting-edge industry tools and architectures to speed up analysis workflows in High Energy Physics, while providing a programming interface that enables automatic parallelisation, both on a single machine and on a set of distributed resources. It focuses on modern programming models and on how to make the best use of the available hardware resources while providing a seamless user experience. The thesis also proposes a modern distributed computing solution for HEP data analysis, making use of the established ROOT software framework and in particular of its data analysis layer implemented in the RDataFrame class. A few key research areas revolving around this proposal are explored. From the user's point of view, this takes the form of a new interface to data analysis that is able to run on a laptop or on thousands of computing nodes with no change in the user application. This development opens the door to exploiting distributed resources via industry-standard execution engines that can scale to multiple nodes on HPC or HTC clusters, or even on serverless offerings of commercial clouds. Since data analysis in this field is often I/O bound, a good understanding of the possible caching mechanisms is needed. In this regard, a novel storage system based on object store technology was investigated as a caching target.
In conclusion, the future of data analysis in High Energy Physics presents challenges from various perspectives, from the exploitation of distributed computing and storage resources to the design of ergonomic user interfaces. Software frameworks should aim at efficiency and ease of use, decoupling as much as possible the definition of the physics computations from the implementation details of their execution. This thesis is framed in the collective effort of the HEP community towards these goals, defining problems and possible solutions that can be adopted by future researchers. / Padulano, VE. (2023). Distributed Computing Solutions for High Energy Physics Interactive Data Analysis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/193104
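Since the abstract centres on ROOT's RDataFrame layer, a minimal sketch of the declarative analysis style it refers to may help; the tree name, file name, and selection below are placeholders, and the remark about the distributed backends is a general assumption about how they are exposed rather than a precise API reference.

```python
# Minimal sketch of an RDataFrame analysis (PyROOT). Tree/file names and the
# selection are placeholders; the distributed variant is sketched only in comments.
import ROOT

# Declarative analysis: build a computation graph, trigger it lazily.
df = ROOT.RDataFrame("Events", "data.root")          # placeholder tree and file
h = (df.Filter("nMuon == 2", "exactly two muons")    # selection step
       .Define("pt_lead", "Muon_pt[0]")              # derived column
       .Histo1D(("pt_lead", "Leading muon pT", 50, 0.0, 200.0), "pt_lead"))

h.GetValue()   # the event loop runs here, once, over all booked results

# The thesis targets running the same expression unchanged on many nodes,
# e.g. through a Dask- or Spark-backed RDataFrame constructor under
# ROOT.RDF.Experimental.Distributed (exact spelling depends on the ROOT release).
```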
205 |
Generating a Normalized Database Using Class Normalization. Sudhindaran, Daniel Sushil. 01 January 2017.
Relational databases are the most popular databases used by enterprise applications to store persistent data to this day; they offer a great deal of flexibility and efficiency. A process called database normalization helps ensure that the database is free from redundancies and update anomalies. In a Database-First approach to software development, the database is designed first, and then an Object-Relational Mapping (ORM) tool is used to generate the programming classes (data layer) that interact with the database. Finally, the business logic code is written to interact with the data layer and persist the business data to the database. In modern application development, however, a Code-First approach has evolved in which the domain classes and the business logic that interacts with them are written first. An Object-Relational Mapping (ORM) tool is then used to generate the database from the domain classes. In this approach, since database design is not an explicit concern, software programmers may ignore the process of database normalization altogether. To help programmers in this situation, this thesis takes the theory behind the five database normal forms (1NF - 5NF) and proposes Five Class Normal Forms (1CNF - 5CNF) that programmers may use to normalize their domain classes. The thesis demonstrates that when the Five Class Normal Forms are applied manually to a class by a programmer, the resulting database generated through the Code-First approach is also normalized according to the rules of relational theory.
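As an illustration of the Code-First setting the thesis addresses, the hedged sketch below uses SQLAlchemy as a stand-in ORM (not a tool named in the thesis) to show a repeating group being factored out of a domain class, in the spirit of a first Class Normal Form, so that the schema the ORM generates is itself in first normal form; all class and column names are invented.

```python
# Hedged sketch: Code-First schema generation with SQLAlchemy (a stand-in ORM).
# Class and column names are illustrative, not taken from the thesis.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

# Unnormalized domain class would carry a repeating group in one row, e.g.:
#   phone1 = Column(String); phone2 = Column(String); phone3 = Column(String)
# After applying a 1CNF-style rule, the repeating group becomes its own class,
# so the generated tables satisfy first normal form.

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    phones = relationship("PhoneNumber", back_populates="customer")

class PhoneNumber(Base):
    __tablename__ = "phone_number"
    id = Column(Integer, primary_key=True)
    number = Column(String, nullable=False)
    customer_id = Column(Integer, ForeignKey("customer.id"), nullable=False)
    customer = relationship("Customer", back_populates="phones")

if __name__ == "__main__":
    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)   # Code-First: the ORM generates the schema
```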
206 |
BlobSeer as a data-storage facility for clouds: self-adaptation, integration, evaluation / Utilisation de BlobSeer pour le stockage de données dans les clouds : auto-adaptation, intégration, évaluation. Carpen-Amarie, Alexandra. 15 December 2011.
L’émergence de l’informatique dans les nuages met en avant de nombreux défis qui pourraient limiter l’adoption du paradigme Cloud. Tandis que la taille des données traitées par les applications Cloud augmente exponentiellement, un défi majeur porte sur la conception de solutions efficaces pour la gestion de données. Cette thèse a pour but de concevoir des mécanismes d’auto-adaptation pour des systèmes de gestion de données, afin qu’ils puissent répondre aux exigences des services de stockage Cloud en termes de passage à l’échelle, disponibilité et sécurité des données. De plus, nous nous proposons de concevoir un service de données qui soit à la fois compatible avec les interfaces Cloud standard dans et capable d’offrir un stockage de données à haut débit. Pour relever ces défis, nous avons proposé des mécanismes génériques pour l’auto-connaissance, l’auto-protection et l’auto-configuration des systèmes de gestion de données. Ensuite, nous les avons validés en les intégrant dans le logiciel BlobSeer, un système de stockage qui optimise les accès hautement concurrents aux données. Finalement, nous avons conçu et implémenté un système de fichiers s’appuyant sur BlobSeer, afin d’optimiser ce dernier pour servir efficacement comme support de stockage pour les services Cloud. Puis, nous l’avons intégré dans un environnement Cloud réel, la plate-forme Nimbus. Les avantages et les désavantages de l’utilisation du stockage dans le Cloud pour des applications réelles sont soulignés lors des évaluations effectuées sur Grid’5000. Elles incluent des applications à accès intensif aux données, comme MapReduce, et des applications fortement couplées, comme les simulations atmosphériques. / The emergence of Cloud computing brings forward many challenges that may limit the adoption rate of the Cloud paradigm. As data volumes processed by Cloud applications increase exponentially, designing efficient and secure solutions for data management emerges as a crucial requirement. The goal of this thesis is to enhance a distributed data-management system with self-management capabilities, so that it can meet the requirements of the Cloud storage services in terms of scalability, data availability, reliability and security. Furthermore, we aim at building a Cloud data service both compatible with state-of-the-art Cloud interfaces and able to deliver high-throughput data storage. To meet these goals, we proposed generic self-awareness, self-protection and self-configuration components targeted at distributed data-management systems. We validated them on top of BlobSeer, a large-scale data-management system designed to optimize highly-concurrent data accesses. Next, we devised and implemented a BlobSeer-based file system optimized to efficiently serve as a storage backend for Cloud services. We then integrated it within a real-world Cloud environment, the Nimbus platform. The benefits and drawbacks of using Cloud storage for real-life applications have been emphasized in evaluations that involved data-intensive MapReduce applications and tightly-coupled, high-performance computing applications.
207 |
Nanostructuration par laser femtoseconde dans un verre photo-luminescent. Bellec, Matthieu. 05 November 2009.
L'objet de cette thèse est l'étude de l'interaction d'un laser femtoseconde avec un support photosensible particulier: un verre phosphate dopé à l'argent appelé verre photo-luminescent (PL). Une nouvelle approche permettant de réaliser en trois dimensions dans un verre PL des nanostructures d'argent aux dimensions bien inférieures à la limite de diffraction est tout d'abord présentée. La mesure des propriétés optiques et structurales pour différentes échelles (spatiales et temporelles) a permis de proposer un mécanisme de formation des structures photo-induites qui est basé sur un jeu subtil entre les phénomènes d'absorption non-linéaire et de thermo-diffusion. La deuxième partie de cette thèse sera orientée sur les propriétés optiques (linéaires et non-linéaires) et les applications de ces nanostructures d'argent. En particulier, l'exaltation des propriétés non-linéaires des agrégats d'argent sera exploitée pour stocker optiquement de l'information en trois dimensions. / The aim of this work is the study of the interaction between a femtosecond laser and a special photosensitive medium: a silver-containing phosphate glass, also called photo-luminescent (PL) glass. A new approach that allows writing 3D silver nanostructures inside the PL glass with feature sizes down to the diffraction limit is presented first. Based on optical and structural measurements at different time and spatial scales, the mechanism of formation of these nanostructures is described; a subtle interplay between nonlinear absorption and thermo-diffusion effects is found to be the key to the mechanism. The second part of this work focuses on the optical properties (linear and nonlinear) and a few applications of the silver nanostructures. More particularly, the enhancement of their nonlinear properties is used for three-dimensional optical data storage.
208 |
A Dredging Knowledge-Base Expert System for Pipeline Dredges with Comparison to Field Data. Wilson, Derek Alan. December 2010.
A Pipeline Analytical Program and Dredging Knowledge-Base Expert-System (DKBES) determines a pipeline dredge's production and resulting cost and schedule. Pipeline dredge engineering presents a complex and dynamic process necessary to maintain navigable waterways. Dredge engineers use pipeline engineering and slurry transport principles to determine the production rate of a pipeline dredge system. Engineers then use cost engineering factors to determine the expense of the dredge project.

Previous work in engineering incorporated an object-oriented expert-system to determine cost and scheduling of mid-rise building construction, where data objects represent the fundamental elements of the construction process within the program execution. A previously developed dredge cost estimating spreadsheet program, which uses hydraulic engineering and slurry transport principles, determines the performance metrics of a dredge pump and pipeline system. This study focuses on combining hydraulic analysis with the functionality of an expert-system to determine the performance metrics of a dredge pump and pipeline system and its resulting schedule.

Field data from the U.S. Army Corps of Engineers pipeline dredge, Goetz, and several contract daily dredge reports show how accurately the DKBES can predict pipeline dredge production. Real-time dredge instrumentation data from the Goetz compares the accuracy of the Pipeline Analytical Program to actual dredge operation. Comparison of the Pipeline Analytical Program to pipeline daily dredge reports shows how accurately the Pipeline Analytical Program can predict a dredge project's schedule over several months. Both of these comparisons determine the accuracy and validity of the Pipeline Analytical Program and DKBES as they calculate the performance metrics of the pipeline dredge project.

The results of the study determined that the Pipeline Analytical Program compared closely to the Goetz field data where only pump and pipeline hydraulics affected the dredge production. Results from the dredge projects determined the Pipeline Analytical Program underestimated actual long-term dredge production. Study results identified key similarities and differences between the DKBES and spreadsheet program in terms of cost and scheduling. The study then draws conclusions based on these findings and offers recommendations for further use.
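To give a feel for the pump-and-pipeline calculations the spreadsheet program and the Pipeline Analytical Program build on, the sketch below combines the standard Darcy-Weisbach friction head-loss relation with a purely volumetric production estimate; it is a simplified stand-in with invented input values, not the model developed in the thesis.

```python
# Simplified, illustrative pipeline-dredge estimate (not the thesis's model).
# Uses Darcy-Weisbach friction head loss and a volumetric production rate.
import math

G = 9.81  # gravitational acceleration, m/s^2

def friction_head_loss(f: float, length_m: float, diameter_m: float, velocity_ms: float) -> float:
    """Darcy-Weisbach: h_f = f * (L/D) * v^2 / (2g), in metres of slurry."""
    return f * (length_m / diameter_m) * velocity_ms**2 / (2.0 * G)

def production_rate(diameter_m: float, velocity_ms: float, cv: float) -> float:
    """Volumetric solids production (m^3/h) = pipe area * velocity * solids concentration."""
    area = math.pi * diameter_m**2 / 4.0
    return area * velocity_ms * cv * 3600.0

if __name__ == "__main__":
    # Invented example values: 0.6 m pipe, 2 km line, 5 m/s slurry velocity, 15% solids by volume.
    hf = friction_head_loss(f=0.018, length_m=2000.0, diameter_m=0.6, velocity_ms=5.0)
    q = production_rate(diameter_m=0.6, velocity_ms=5.0, cv=0.15)
    print(f"Friction head loss: {hf:.1f} m, production: {q:.0f} m^3/h")
```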
209 |
Design and Pilot Study of an Arizona Water Information System. Foster, K. E.; Johnson, J. D. 06 May 1972.
From the Proceedings of the 1972 Meetings of the Arizona Section - American Water Resources Assn. and the Hydrology Section - Arizona Academy of Science - May 5-6, 1972, Prescott, Arizona / Water information systems may have different demands, such as responding to queries about rainfall-runoff relationships, water level data, water quality data and water use. Data required for retrieval may need display, such as a hydrograph. Information systems are reviewed and results of specific water information agencies are reported. Agencies in Arizona are listed with their specific water information need. Development of a water activity file and water information system is outlined for Arizona as a pilot project. Linkage of units within the data system is shown, as is the information system's questionnaire to project leaders. Information currently in the system includes water quality from the state department of health for 450 wells in the Tucson basin, and water level, storage, storage coefficient and transmissivity supplied by the Arizona water commission for the Tucson basin and Avra Valley. Quality of data submitted to the system should be reflected in retrieval for better understanding of the data. This consideration is planned for the coming fiscal year.
210 |
New Algorithms for Macromolecular Structure Determination / Neue Algorithmen zur Strukturbestimmung von Makromolekülen. Heisen, Burkhard Clemens. 08 September 2009.
No description available.