441. No School Left Behind: Oakland Unified School District Discipline Reform and Policy Implementation Case Study / Segura Betancourt, Maria Alejandra, 22 June 2023
This paper critically evaluates school discipline reform policy and its implementation by California in the Oakland Unified School District after the U.S. Department of Education, Office for Civil Rights investigation. It demonstrates that policy implementation at the school level is just as important as policy building and reform at the state and district levels. The Oakland Unified School District was subject to many reforms at the district level through changes in state-wide legislation and school board reform after the investigation concluded with several recommendations for the district. This provides a unique opportunity to study policy implementation at the school level to understand how school environment and discretion may affect reform implementation. As research on the effects of punitive school discipline continues to support alternative discipline practices, many states and school districts have begun to implement their own reforms. However, school discretion over how these policies are implemented calls for researchers to focus on the school level of policy implementation. This thesis aims to build an understanding of how policy implementation at the state and district levels differs across schools in the same district, focusing on how school environment can influence implementation. / Master of Arts / This paper evaluates policy implementation in a California school district at the school level. In 2012, the Department of Education Office for Civil Rights conducted an investigation in California's Oakland Unified School District on reports of the district subjecting minority students to harsher punitive punishment than their white peers. The Office for Civil Rights found evidence to support this claim and recommended many discipline policy and practice reforms to the district, which the district began to implement throughout its schools. This paper reviews state-wide and district-wide discipline reform by comparing two high schools that experienced a difference in suspensions after reform was implemented. I offer insight into policy implementation by focusing on school environment through mission and vision statements. I perform my analysis through a comparative case study of the two schools as well as content analysis of the state and district level policies and practices discussing school discipline. This paper emphasizes that school policy reform at the state and district levels is important; however, policy implementation at the school level ultimately creates change and is affected by school environment.
442. Domain Specific Language (DSL) visualisation for Big Data Pipelines / Mitrovic, Vlado, January 2024
With the growth of big data technologies, it has become challenging to design and manage complex data workflows, especially for non-technical people. However, to understand and process these data in the best way, we need to rely on domain experts who are often not familiar with the tools available on the market. This thesis identifies the needs and describes the implementation of an easy-to-use tool for defining and visualising data processing workflows, which it does by abstracting away the technical requirements imposed by other solutions. The research methodology includes the definition of customer requirements, architecture design, prototype development, and user testing. The iterative approach used in this project ensures continuous improvement based on user feedback. The final solution is then assessed using KPI metrics such as usability, integration, performance, and support.
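As a rough illustration of the kind of artefact such a tool manipulates, here is a hedged sketch that defines a tiny pipeline as a dependency mapping and renders it as text. The DSL shape, step names, and the `render` helper are hypothetical stand-ins, not the thesis's actual language or visualiser.

```python
# Hypothetical sketch: a pipeline declared as step -> {upstream steps}.
# All step names and the mapping format are illustrative assumptions.
from graphlib import TopologicalSorter  # standard library since Python 3.9

pipeline = {
    "ingest":    set(),
    "clean":     {"ingest"},
    "aggregate": {"clean"},
    "train":     {"aggregate"},
    "report":    {"aggregate", "train"},
}

def render(spec: dict[str, set[str]]) -> None:
    """Print steps in a valid execution order with their upstream dependencies."""
    for step in TopologicalSorter(spec).static_order():
        deps = ", ".join(sorted(spec[step])) or "(source)"
        print(f"{step:<10} <- {deps}")

render(pipeline)
```

A graphical front end of the kind the thesis describes would draw the same dependency structure as nodes and edges instead of printing it.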
443. NEUTRON SCATTERING STUDIES OF CRUDE OIL VISCOSITY REDUCTION WITH ELECTRIC FIELD / Du, Enpeng, January 2015
Small-angle neutron scattering (SANS) is a very powerful laboratory technique for microstructure research, similar to small-angle X-ray scattering (SAXS) and light scattering for microstructure investigations in various materials. In the SANS technique, neutrons are elastically scattered by changes of refractive index on a nanometer scale inside the sample, through interaction with the nuclei of the atoms present. Because the nuclei of all atoms are compact and of comparable size, neutrons are capable of interacting strongly with all atoms. This is in contrast to X-ray techniques, where the X-rays interact weakly with hydrogen, the most abundant element in most samples. The SANS refractive index is directly related to the scattering length density and is a measure of the strength of the interaction of a neutron wave with a given nucleus. SANS can probe inhomogeneities on length scales from 1 nm to 1000 nm. Since this is a very useful range, the technique provides valuable information over a wide variety of scientific and technological applications, including chemical aggregation, defects in materials, surfactants, colloids, ferromagnetic correlations in magnetism, alloy segregation, polymers, proteins, biological membranes, viruses, ribosomes, and macromolecules. Quoting the Nobel committee when awarding the prize to C. Shull and B. Brockhouse in 1994: "Neutrons tell you where the atoms are and what the atoms do". At NIST, a single beam of neutrons is generated from either a reactor or a pulsed neutron source and selected by a velocity selector. The beam passes through a neutron guide and is then scattered by the sample. After the sample chamber, 2D gas detectors collect the elastic scattering information. SANS usually relies on collimation of the neutron beam to determine the scattering angle of a neutron, which further lowers the signal-to-noise ratio of the data that carries information on the properties of the sample. We can analyze the data acquired from the detectors to obtain information on size, shape, etc. This is why we chose SANS as our research tool. The world's top energy problems are security concerns, climate concerns, and environmental concerns. So far, oil (37%) is still the No. 1 fuel in world energy consumption (oil 37%, coal 25%, bio-fuels 0.2%, gas 23%, nuclear 6%, biomass 4%, hydro 3%, solar heat 0.5%, wind 0.3%, geothermal 0.2%, and solar photovoltaic 0.04%). More and more alternative energy sources (bio-fuels, nuclear, and solar) will be used in the future, but nuclear energy has a major safety issue after the Japanese Fukushima I nuclear accident, and the other sources contribute only a small percentage. Thus, it is very important to improve the efficiency and reduce the pollution of petroleum products. There is probably one thing that we can all agree on: the world's energy reserves are not unlimited. Moreover, only 30% of the oil reserves are conventional oil, so in order to produce, transport, and refine heavy crude oil without wasting huge amounts of energy, we need to reduce its viscosity without using high-temperature steam heating or diluent; and as more and more off-shore oil is exploited, we need to reduce viscosity without increasing temperature. Petroleum consumed in the U.S. in 2009 totaled 18.7 million barrels per day, 35% of all the energy we consumed.
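For orientation, the standard small-angle relations (textbook SANS conventions, not results of this thesis) connect the measurement geometry to the probed length scale:

```latex
q = \frac{4\pi}{\lambda}\,\sin\!\left(\frac{\theta}{2}\right), \qquad d \approx \frac{2\pi}{q},
```

where \lambda is the neutron wavelength, \theta the scattering angle, q the momentum transfer, and d the real-space length scale, so small scattering angles correspond to the 1 nm to 1000 nm inhomogeneities quoted above.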
Diesel is a very important fossil fuel, accounting for about 20% of petroleum consumed. Most of the world's oil is non-conventional: 15% heavy oil, 25% extra heavy oil, and 30% oil sands and bitumen, while conventional oil is only 30% of reserves. Oil sand is closely related to heavy crude oil, the main difference being that oil sands generally do not flow at all. For efficient energy production and conservation, how to lower the viscosity of liquid fuels and crude oil is a very important topic. Dr. Tao and his group at Temple University, using his electro- and magneto-rheological viscosity theory, have developed a new technology that utilizes electric or magnetic fields to change the rheology of complex fluids and reduce their viscosity while keeping the temperature unchanged. After we successfully reduced the viscosity of crude oil with the field and investigated the microstructure changes in various crude oil samples with SANS, we continued to reduce the viscosity of heavy crude oil, bunker diesel, ultra-low-sulfur diesel, bio-diesel, and crude oil at ultra-low temperature with electric field treatment. Our research group developed the electrorheological viscosity theory and investigated flow rates in laboratory and field pipelines, but we had never visualized this aggregation. The small-angle neutron scattering experiment has confirmed the theoretical prediction that a strong electric field induces the suspended nano-particles inside crude oil to aggregate into short chains along the field direction. This aggregation breaks the symmetry, making the viscosity anisotropic: along the field direction, the viscosity is significantly reduced. The experiment enables us to determine the induced chain size and shape, and verifies that the electric field works for all kinds of crude oils: paraffin-based, asphalt-based, and mixed-based. The basic physics of such field-induced viscosity reduction is applicable to all kinds of suspensions. / Physics
444. Transient phenomena during the emptying process of water in pressurized pipelines / Coronado Hernández, Óscar Enrique, 04 April 2020
Thesis by compendium / The analysis of transient phenomena during water filling operations in pipelines of irregular profile has been studied in much more detail than emptying maneuvers, and the literature lacks mathematical models of the emptying operation. This research starts with the analysis of the transient phenomenon during emptying maneuvers in single pipelines, as a previous stage to understanding the emptying operation in pipelines of irregular profile. Analyses are conducted under two typical situations: (i) one corresponding to either the situation where there are no air valves installed or where they have failed due to operational and maintenance problems, which represents the worst condition because it causes the lowest troughs of subatmospheric pressure; and (ii) the other corresponding to the situation where air valves have been installed at the highest points of hydraulic installations to give reliability by admitting air into the pipelines, preventing troughs of subatmospheric pressure.
In particular, this research developed a mathematical model to predict the behavior of emptying operations. The mathematical model is proposed for the two aforementioned situations. The liquid phase (water) is simulated using a rigid water column model (RWCM), which neglects the pipe and water elasticity given that the elasticity of the entrapped air pockets is much higher than that of the pipe and the water. The air-water interface is simulated with a piston flow model, assuming that the water column is perpendicular to the main direction of the flow. The gas phase is modeled using three formulations: (a) a polytropic model based on its energetic behavior, which considers the expansion of air pockets; (b) an air valve characterization to quantify the magnitude of the admitted air flow; and (c) a continuity equation for the air. The resulting system of ordinary differential equations is solved using Matlab's Simulink tool.
The proposed model has been validated using experimental facilities at the hydraulic laboratories of the Universitat Politècnica de València, Valencia, Spain, and the Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal. The results show how the mathematical model adequately predicts the experimental data, including the pressure oscillation patterns, the water velocities, and the lengths of the water columns.
Finally, the mathematical model is applied to a case study to show a practical application, which engineers can use to study the phenomenon in real pipelines and make decisions about performing the emptying operation. / Coronado Hernández, ÓE. (2019). Transient phenomena during the emptying process of water in pressurized pipelines [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/120024
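To make the structure of such a model concrete, the following is a minimal sketch of a rigid-water-column emptying model with a polytropic air pocket for the first situation above (no air valves), written in Python with SciPy rather than Simulink. The single sloped-pipe geometry, all parameter values, and the simplified momentum balance are illustrative assumptions, not the thesis's validated model.

```python
# Minimal rigid-column emptying sketch: one sloped pipe, closed upstream end
# with an expanding air pocket, open drain downstream. Illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

g, rho, p_atm = 9.81, 1000.0, 101325.0       # gravity, water density, atmospheric pressure
L_T, D, theta = 100.0, 0.2, np.radians(5.0)  # pipe length (m), diameter (m), downward slope
fric, k = 0.018, 1.2                         # Darcy friction factor, polytropic exponent
x0, p0 = 0.5, p_atm                          # initial pocket length (m) and pressure (Pa)

def rhs(t, y):
    v, l = y                                 # drain velocity (m/s), water column length (m)
    x = L_T - l                              # air-pocket length (m)
    p_air = p0 * (x0 / x) ** k               # polytropic expansion of the entrapped pocket
    dvdt = ((p_air - p_atm) / (rho * l)      # subatmospheric pocket opposes drainage
            + g * np.sin(theta)              # gravity drives the column toward the drain
            - fric * v * abs(v) / (2 * D))   # Darcy-Weisbach friction
    return [dvdt, -v]                        # the column shortens as water leaves

def nearly_empty(t, y):                      # stop if only ~1 m of water remains
    return y[1] - 1.0
nearly_empty.terminal = True

sol = solve_ivp(rhs, (0.0, 600.0), [0.0, L_T - x0], events=nearly_empty, max_step=0.1)
done = sol.t_events[0].size > 0
print("drained" if done else "stalled on subatmospheric pocket",
      f"at t = {sol.t[-1]:.0f} s, column = {sol.y[1][-1]:.1f} m")
```

Running this sketch shows the pocket pressure falling until it balances gravity, so drainage stalls at a subatmospheric trough; that stalling is precisely why the second situation installs air valves at the high points.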
445. Defining Defiance: African-American Middle School Students’ Perspective on the Impact of Teachers’ Disciplinary Referrals / Ray, Patricia, 18 March 2016
The purpose of this study is to understand how African-American males enrolled in middle school in Los Angeles County experienced and understood the application of the California educational code regarding discipline. Disproportionate numbers of African-American students are being suspended and expelled from public schools. This overreliance on exclusionary punishment has led to the School-to-Prison Pipeline, and the statistics related to suspension rates from school mirror those of the criminal justice system. This study captures the voices of students who are consistently referred to the office by classroom teachers in order to understand how they perceive and articulate their experiences with the school disciplinary process and how those experiences impact their academic and personal lives. Findings indicate that participants want to do well in school. The participants described many of the behaviors that triggered an office referral as trivial, such as being tardy to class, talking, or not doing their work. When their infractions were more serious, students stated that they acted out because the teacher had disrespected or antagonized them. More than anything, participants want teachers to listen to them and to respect them, and they want to be active participants in their learning.
446. OPTIMIZING MACHINE LEARNING PIPELINES FOR MODEL PERFORMANCE / Tejendra Pratap Singh (19348627), 10 December 2024
Data pipelines are core machine learning components, essential for moving data through various stages and applying transformations to enhance data quality for model training, thereby improving performance and efficiency. However, as data volumes grow, optimizing these pipelines becomes increasingly complex, which can impact performance and increase the cost of finding the optimal pipeline. Data-centric systems trained on historical data are found across various sectors, including finance, education, marketing, and healthcare. Once deployed, these systems need to be monitored, and continuous testing is required to ensure their performance on new incoming data. When the system encounters failures on new incoming data, debugging is needed to find the data point causing the failure, and finding the optimal pipeline for the new data can also be daunting. In this research, we aim to address these challenges by proposing an approach that uses the GRASP method to find the new pipeline and a data profile to find the cause of the disconnect between the pipeline and the data.
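For readers unfamiliar with GRASP (greedy randomized adaptive search procedure), the sketch below applies it to a toy pipeline-selection problem: a greedy-randomized construction step draws from a restricted candidate list, then local search swaps single stages. The search space, cross-validation scoring, and all parameter values are illustrative assumptions, not the thesis's system.

```python
# GRASP over a tiny preprocessing search space; illustrative assumptions only.
import random
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, MinMaxScaler, StandardScaler

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

SPACE = {  # one option per stage; FunctionTransformer() acts as "do nothing"
    "scale":  [StandardScaler(), MinMaxScaler(), FunctionTransformer()],
    "select": [SelectKBest(f_classif, k=5), SelectKBest(f_classif, k=10),
               FunctionTransformer()],
}

def score(choice):
    steps = list(choice.items()) + [("clf", LogisticRegression(max_iter=1000))]
    return cross_val_score(Pipeline(steps), X, y, cv=3).mean()

def grasp(iters=5, alpha=0.67, rng=random.Random(0)):
    best, best_s = None, -1.0
    for _ in range(iters):
        choice = {}
        for stage, options in SPACE.items():   # greedy-randomized construction
            ranked = sorted(options, reverse=True,
                            key=lambda o: score({**choice, stage: o}))
            rcl = ranked[: max(1, int(alpha * len(ranked)))]  # restricted candidate list
            choice[stage] = rng.choice(rcl)
        s, improved = score(choice), True
        while improved:                        # local search: swap one stage at a time
            improved = False
            for stage, options in SPACE.items():
                for o in options:
                    cand = {**choice, stage: o}
                    cs = score(cand)
                    if cs > s:
                        choice, s, improved = cand, cs, True
        if s > best_s:
            best, best_s = choice, s
    return best, best_s

best, best_s = grasp()
print(f"best CV accuracy {best_s:.3f} using "
      f"{[type(v).__name__ for v in best.values()]}")
```

The alpha parameter trades greediness against diversification: a tiny alpha keeps only the top candidate (pure greedy construction), while larger values admit more options into the restricted list.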
447. Effect of Installation Practices on Galvanic Corrosion in Service Lines, Low Flow Rate Sampling for Detecting Water-Lead Hazards, and Trace Metals on Drinking Water Pipeline Corrosion: Lessons in Unintended Consequences / Clark, Brandi Nicole, 17 April 2015
Corrosion of drinking water distribution systems can cost water utilities and homeowners tens of billions of dollars each year in infrastructure damage, adversely impacting public health and causing water loss through leaks. Often, seemingly innocuous choices made by utilities, plumbers, and consumers can have a dramatic impact on corrosion and pipeline longevity.
This work demonstrated that brass pipe connectors used in partial lead service line replacements (PLSLR) can significantly influence galvanic corrosion between lead and copper pipes. Galvanic crevice corrosion was implicated in a fourfold increase in lead compared to a traditional direct connection, which was previously assumed to be a worst-case connection method.
In field sampling conducted in two cities, a new sampling method designed to detect particulate lead risks demonstrated that the choice of flow rate has a substantial impact on lead-in-water hazards. On average, lead concentrations detected in water at high flow without stagnation were at least 3X-4X higher than in traditional regulatory samples with stagnation, demonstrating a new 'worst case' lead release scenario due to detachment of lead particulates.
Although galvanized steel was previously considered a minor lead source, it can contain up to 2% lead on the surface, and elevated lead-in-water samples from several cities were traced to galvanized pipe, including the home of a child with elevated blood lead.
Furthermore, if both galvanized and copper pipe are present, as occurs in large buildings, deposition corrosion is possible, leading to both increased lead exposure and pipe failures in as little as two years. Systematic laboratory studies of deposition corrosion identified key factors that increase or decrease its likelihood; soluble copper concentration and flow pattern were identified as controlling factors. Because of the high copper concentrations and continuous flow associated with mixed-metal hot water recirculating systems, these systems were identified as a worst-case scenario for galvanic corrosion.
Deposition corrosion was also confirmed as a contributing mechanism to increased lead release, if copper pipe is placed before a lead pipe as occurs in partial service line replacements. Dump-and-fill tests confirmed copper solubility as a key factor in deposition corrosion impacts, and a detailed analysis of lead pipes from both laboratory studies and field tests was consistent with pure metallic copper deposits on the pipe surface, especially near the galvanic junction with copper.
Finally, preliminary experiments were conducted to determine whether nanoparticles from novel water treatment techniques could have a negative impact on downstream drinking water pipeline infrastructure. Although increases in the corrosion of iron, copper, and stainless steel pipes in the presence of silver and carbon nanomaterials were generally small or non-existent, in one case the presence of silver nanoparticles increased iron release from stainless steel by more than 30X via a localized corrosion mechanism, with pitting rates as high as 1.2 mm/y, implying serious corrosion consequences are possible for stainless steel pipes if nanoparticles are present. / Ph. D.
448. Directive-Based Data Partitioning and Pipelining and Auto-Tuning for High-Performance GPU Computing / Cui, Xuewen, 15 December 2020
The computer science community needs simpler mechanisms to achieve the performance potential of accelerators, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi), due to their increasing use in state-of-the-art supercomputers. Over the past 10 years, we have seen a significant improvement in both computing power and memory connection bandwidth for accelerators. However, we also observe that the computation power has grown significantly faster than the interconnection bandwidth between the central processing unit (CPU) and the accelerator.
Given that accelerators generally have their own discrete memory space, data needs to be copied from the CPU host memory to the accelerator (device) memory before computation starts on the accelerator. Moreover, programming models like CUDA, OpenMP, OpenACC, and OpenCL can efficiently offload compute-intensive workloads to these accelerators. However, achieving the overlap of data transfers with kernel computation in these models is neither simple nor straightforward: codes copy data to or from the device without overlap unless the user supplies explicit design and refactoring.
Achieving performance can require extensive refactoring and hand-tuning to apply data transfer optimizations, and users must manually partition their dataset whenever its size is larger than device memory, which can be highly difficult when the device memory size is not exposed to the user. As systems become more and more heterogeneous, CPUs are responsible for handling many tasks: interacting with other accelerators, computation and data movement tasks, task dependency checking, and task callbacks. Leaving all logic control to the CPU not only costs extra communication delay over the PCI-e bus but also consumes CPU resources, which may affect the performance of other CPU tasks. This thesis work aims to provide efficient directive-based data pipelining approaches for GPUs that tackle these issues and improve performance, programmability, and memory management. / Doctor of Philosophy / Over the past decade, parallel accelerators have become increasingly prominent in this emerging era of "big data, big compute, and artificial intelligence." In more recent supercomputers and datacenter clusters, we find multi-core central processing units (CPUs), many-core graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and co-processors (e.g., Intel Xeon Phi) being used to accelerate many kinds of computation tasks.
While many new programming models have been proposed to support these accelerators, scientists and developers without domain knowledge usually find the existing programming models not efficient enough to port their code to accelerators. Due to the limited accelerator on-chip memory size, the data array is often too large to fit in the on-chip memory, especially when dealing with deep learning tasks. The data need to be partitioned and managed properly, which requires more hand-tuning effort. Moreover, performance tuning is difficult for developers aiming at high performance for specific applications due to a lack of domain knowledge. To handle these problems, this dissertation proposes a general approach to provide better programmability, performance, and data management for accelerators. Accelerator users often prefer to keep their existing verified C, C++, or Fortran code rather than grapple with unfamiliar code.
Since 2013, OpenMP has provided a straightforward way to adapt existing programs to accelerated systems. We propose multiple associated clauses to help developers easily partition and pipeline accelerated code. Specifically, the proposed extension can efficiently overlap kernel computation with data transfer between host and device. The extension supports memory over-subscription, meaning the memory required by the tasks can be larger than the GPU memory size, and the internal scheduler guarantees that data is swapped out correctly and efficiently. Machine learning methods are also leveraged to help auto-tune accelerator performance.
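To illustrate the copy/compute overlap that these clauses are meant to automate, here is a language-agnostic toy sketch in Python: two threads and a bounded queue stand in for the DMA engine, the GPU, and a double buffer. The chunk count and stage latencies are made-up numbers, and nothing here is the thesis's OpenMP implementation; it only shows why pipelining hides transfer time.

```python
# Toy copy/compute pipeline: overlap "transfers" with "kernels" via two threads.
import queue
import threading
import time

CHUNKS, COPY_T, COMPUTE_T = 8, 0.05, 0.08   # chunk count, fake per-chunk latencies (s)

def copy_engine(out_q):
    for i in range(CHUNKS):
        time.sleep(COPY_T)                  # stands in for a host-to-device transfer
        out_q.put(i)                        # chunk i is now "resident on the device"
    out_q.put(None)                         # sentinel: no more chunks

def compute_engine(in_q):
    while (i := in_q.get()) is not None:
        time.sleep(COMPUTE_T)               # stands in for the kernel on chunk i

buf = queue.Queue(maxsize=2)                # bounded queue = double buffering
start = time.perf_counter()
t_copy = threading.Thread(target=copy_engine, args=(buf,))
t_comp = threading.Thread(target=compute_engine, args=(buf,))
t_copy.start(); t_comp.start(); t_copy.join(); t_comp.join()
pipelined = time.perf_counter() - start

serial = CHUNKS * (COPY_T + COMPUTE_T)      # cost of copy-everything-then-compute
print(f"pipelined {pipelined:.2f} s vs serial ~{serial:.2f} s")
```

Once the first chunk arrives, transfer and compute proceed concurrently, so total time approaches one copy plus all computes instead of their full sum; the directive-based extension targets the same effect on real devices.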
449. Computational Analysis of Viruses in Metagenomic Data / Tithi, Saima Sultana, 24 October 2019
Viruses have a huge impact on controlling diseases and regulating many key ecosystem processes. Because metagenomic data can contain many microbiomes, including many viruses, analyzing metagenomic data lets us analyze many viruses at the same time. The first step in analyzing metagenomic data is to identify and quantify the viruses present. To address this, we developed a computational pipeline, FastViromeExplorer. FastViromeExplorer leverages a pseudoalignment-based approach, which is faster than the traditional alignment-based approach, to quickly align millions or billions of reads. Application of FastViromeExplorer to both human gut samples and environmental samples shows that our tool can successfully identify viruses and quantify their abundances quickly and accurately, even for a large data set.
Although viruses are receiving increased attention, most remain unknown or uncategorized. To discover novel viruses in metagenomic data, we developed a computational pipeline named FVE-novel. FVE-novel leverages a hybrid of reference-based and de novo assembly approaches to recover novel viruses from metagenomic data. By applying FVE-novel to an ocean metagenome sample, we successfully recovered two novel viruses and two different strains of known phages.
Analysis of viral assemblies from metagenomic data reveals that they often contain assembly errors such as chimeric sequences, in which more than one viral genome is incorrectly assembled together. To identify and fix these types of assembly errors, we developed a computational tool called VirChecker. Our tool can identify and fix assembly errors due to chimeric assembly. VirChecker also extends the assembly as much as possible to complete it and then annotates the extended and improved assembly. Application of VirChecker to viral scaffolds collected from an ocean metagenome sample shows that our tool successfully fixes assembly errors and extends two novel virus genomes and two strains of known phage genomes. / Doctor of Philosophy / Viruses, the most abundant micro-organisms on earth, have a profound impact on human health and the environment. Analyzing metagenomic data for viruses has the benefit of analyzing many viruses at a time without the need to cultivate them in the lab environment. Here, in this dissertation, we addressed three research problems of analyzing viruses from metagenomic data. To analyze viruses in metagenomic data, the first question to answer is which viruses are present and in what quantity. To answer this question, we developed a computational pipeline, FastViromeExplorer. Our tool can identify viruses in metagenomic data and quantify their abundances quickly and accurately, even for a large data set. To recover novel virus genomes from metagenomic data, we developed a computational pipeline named FVE-novel. By applying FVE-novel to an ocean metagenome sample, we successfully recovered two novel viruses and two strains of known phages. Examination of viral assemblies from metagenomic data reveals that, due to the complex nature of metagenome data, viral assemblies often contain assembly errors and are incomplete. To solve this problem, we developed a computational pipeline, named VirChecker, to polish, extend, and annotate viral assemblies. Application of VirChecker to virus genomes recovered from an ocean metagenome sample shows that our tool successfully extended and completed those virus genomes.
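As a toy illustration of the quantification step, the sketch below converts per-genome read counts, of the kind a pseudoaligner reports, into length-normalized abundances. The virus names, counts, genome lengths, and the simple RPKM-style formula are illustrative assumptions, not FastViromeExplorer's exact reporting criteria.

```python
# Length-normalized viral abundance from read counts; illustrative values only.
counts = {"crAssphage_like": 12000, "T4_like_phage": 300, "novel_contig_17": 45}
genome_len = {"crAssphage_like": 97_000, "T4_like_phage": 169_000,
              "novel_contig_17": 33_000}   # genome lengths in base pairs
total_reads = 2_000_000                    # mapped reads in the whole sample

def rpkm(n_reads, length_bp, total):
    """Reads per kilobase of genome per million mapped reads."""
    return n_reads / (length_bp / 1e3) / (total / 1e6)

for virus, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{virus:<16} {rpkm(n, genome_len[virus], total_reads):8.2f} RPKM")
```

Normalizing by genome length and sequencing depth keeps a long, deeply covered genome from drowning out a short one, which is the point of reporting abundance rather than raw counts.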
450. Methodology to Enhance the Reliability of Drinking Water Pipeline Performance Analysis / Patel, Pruthvi Shaileshkumar, 25 July 2018
Currently, water utilities are facing monetary crises as they maintain and expand services to meet current as well as future demands. Standard practice in pipeline infrastructure asset management is to collect data and predict the condition of pipelines using models and tools. Water utilities want to be proactive in fixing or replacing pipes, as a fix-when-it-fails ideology leads to increased cost and can affect environmental quality and societal health.
There are a number of modeling techniques available for assessing the condition of pipelines, but there is a massive shortage of methods to check the reliability of the results obtained using different modeling techniques. This is mainly because of the limited data any one utility collects and the absence of piloting of these models at various water utilities.
In general, water utilities feel confident about their in-house condition prediction and failure models but are willing to adopt a reliable methodology that can overcome the issues related to validating the results. This paper presents a methodology that can enhance the reliability of model results for water pipeline performance analysis, so that model output can parallel the output of the real system with confidence. The proposed methodology was checked using datasets from two large water utilities, and it was found that it can potentially help water utilities gain confidence in their analysis results by establishing the statistical significance of those results. / Master of Science / Water utilities are facing monetary crises as they maintain and expand services to meet current as well as future demands. Standard practice in pipeline infrastructure asset management is to collect data and predict the condition of pipelines using models and tools. There are a number of modeling techniques available for assessing the condition of pipelines, but there is a massive shortage of methods to check the reliability of the results obtained using different modeling techniques. This study presents a methodology that can enhance the reliability of model results for water pipeline performance analysis, which can potentially help water utilities be proactive in fixing or replacing pipelines with confidence. Different types of analyses on the data received from two large water utilities (names confidential) were performed to understand and check the application of the proposed methodology in the real world, and it was found that it can potentially help water utilities gain confidence in their analysis results by establishing the statistical significance of those results.
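As one concrete way to establish statistical significance, the hedged sketch below bootstraps a confidence interval for a condition model's hit rate, so a utility can report an interval rather than a single point score. The synthetic data, the 0.5 threshold, and the hit-rate metric are illustrative assumptions, not the thesis's methodology.

```python
# Bootstrap a 95% CI for a pipe-condition model's hit rate; synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
observed = rng.integers(0, 2, n)                          # 1 = pipe failed in period
predicted = np.clip(0.35 * observed + 0.55 * rng.random(n), 0, 1)  # fake model scores

def hit_rate(obs, pred, thresh=0.5):
    return float(np.mean((pred >= thresh) == obs))        # fraction classified correctly

boot = np.array([hit_rate(observed[idx], predicted[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"hit rate {hit_rate(observed, predicted):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

If two utilities' datasets yield overlapping intervals for the same model, its performance can be called consistent across them; disjoint intervals flag the kind of validation gap the methodology is meant to expose.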