1

Factores determinantes de las exportaciones de aceite de palma: tecnología, capacitación y calidad de la Región Ucayali durante el periodo 2013-2018 / Determining factors of palm oil exports: technology, training and quality of the Ucayali Region during the period 2013-2018

Alvarado Aguilar, Arisa, Ballarte Beraún, Norlit Gabriela 19 July 2020 (has links)
Exports of palm oil produced in the main producing countries worldwide show an increasing trend, which indicates that this product has great development prospects. In addition, at the Latin American level, the production and export of palm oil have increased, led by Ecuador and Colombia. Likewise, Peru has highly promising areas for palm production in the Amazon and climates suitable for its development, so palm oil production in the Ucayali Region presents new challenges for achieving sustainable development. This is because Ucayali is among the regions that produce the largest quantity of palm oil for export. Therefore, this study analyzes the factors that influence the increase in palm oil exports: technology, training, and quality. Using a qualitative methodology, the impact of each of these factors on the increase in palm oil exports is analyzed. The main finding from the qualitative approach was the quality factor and its strong positive impact on the increase in palm oil exports in recent years, since this factor creates the opportunity to increase export volumes and enter new markets. However, the technology and training factors are tied to the quality factor, since obtaining better product quality requires the use of technology and trained personnel. / Thesis
2

IMPROVING MICROSERVICES OBSERVABILITY IN CLOUD-NATIVE INFRASTRUCTURE USING EBPF

Bhavye Sharma (15345346) 26 April 2023 (has links)
Microservices have emerged as a popular pattern for developing large-scale applications in cloud environments because of their flexibility, scalability, and agility benefits. However, microservices make management more complex due to their scale, multiple languages, and distributed nature. Orchestration and automation tools like Kubernetes help deploy microservices running simultaneously, but it can be difficult for an operator to understand their behaviors, interdependencies, and interactions. In such a complex and dynamic environment, performance problems (e.g., slow application responses and high resource usage) require significant human effort for diagnosis and recovery. Moreover, manual diagnosis of cloud microservices tends to be tedious, time-consuming, and impractical. Effective and automated performance analysis and anomaly detection require an observable system, meaning an application's internal state can be inferred by observing and tracking metrics, traces, and logs. Traditional application performance monitoring (APM) uses libraries and SDKs to improve application monitoring and tracing, but it carries the additional overhead of rewriting, recompiling, and redeploying the application's code base. Therefore, there is a critical need for a standardized, automated microservices observability solution that does not require rewriting or redeploying the application in order to keep up with the agility of microservices.

This thesis studies observability for microservices and implements an automated observability solution based on the Extended Berkeley Packet Filter (eBPF). eBPF is a Linux feature that allows extensions to the Linux kernel to be written for security and observability use cases. eBPF does not require modifying the application layer or instrumenting the individual microservices. Instead, it instruments kernel-level API calls, which are common across all hosts in the cluster. eBPF programs provide observability information from the lowest-level system calls and can export data without additional performance overhead. The Prometheus time-series database is leveraged to store all the captured metrics and traces for analysis. With the help of our tool, a DevOps engineer can easily identify abnormal behavior of microservices and enforce appropriate countermeasures. Using Chaos Mesh, we inject anomalies at the network and host layers, which the proposed solution can identify along with their root cause. The Chameleon cloud testbed is used to deploy our solution and test its capabilities and limitations.
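To make the approach concrete, here is a minimal sketch of application-agnostic, kernel-level instrumentation using the bcc Python bindings for eBPF. The choice of probe (tcp_v4_connect) and the per-PID counter are illustrative assumptions, not the probes the thesis actually deploys.

```python
# Minimal eBPF instrumentation sketch (requires root and the bcc package).
import time
from bcc import BPF

program = r"""
BPF_HASH(connect_count, u32, u64);

int trace_connect(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;  // upper 32 bits hold the TGID
    connect_count.increment(pid);                // per-process call counter
    return 0;
}
"""

b = BPF(text=program)  # compile and load the program into the kernel
b.attach_kprobe(event="tcp_v4_connect", fn_name="trace_connect")

# Poll the shared map from user space; a production deployment would export
# these counters to Prometheus rather than printing them.
while True:
    time.sleep(5)
    for pid, count in b["connect_count"].items():
        print(f"pid={pid.value}: {count.value} connect() calls")
```

Because the probe attaches to a kernel symbol rather than to application code, every service on the host is observed without rewriting, recompiling, or redeploying anything.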
3

Smart Security System Based on Edge Computing and Face Recognition

Heejae Han (9226565) 27 April 2023 (has links)
Physical security is one of the most basic human needs. People care about it for various reasons: for the safety and security of personnel, to protect private assets, to prevent crime, and so forth. With the recent proliferation of AI, various smart physical security systems are being introduced to the world. Many researchers and engineers are working on developing AI-driven physical security systems capable of identifying potential security threats by monitoring and analyzing data collected from various sensors. One of the most popular ways to detect unauthorized entry into a restricted space is face recognition. With a collected stream of images and a proper algorithm, security systems can recognize faces detected in the images and send an alert when unauthorized faces are recognized. In recent years, there has been active research and development on neural networks for face recognition; FaceNet, for example, is one of the more advanced algorithms. However, not much work has been done to show what kind of end-to-end system architecture is effective for running heavyweight computational loads such as neural network inference. Thus, this study explores different hardware options that can be used in security systems powered by a state-of-the-art face recognition algorithm and proposes that an edge computing based approach can significantly reduce overall system latency and enhance system reactiveness. To analyze the pros and cons of the proposed system, this study presents two different end-to-end system architectures. The first is an edge computing based system that performs most of the computational tasks at the edge node of the system; the other is a traditional application server based system that performs the core computational tasks at the application server. Both systems adopt domain-specific hardware, Tensor Processing Units, to accelerate neural network inference. This paper walks through the implementation details of each system and explores its effectiveness. It provides a performance analysis of each system with regard to accuracy and latency and outlines the pros and cons of each system.
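The recognition decision in FaceNet-style systems reduces to a distance test between embedding vectors. The sketch below shows that matching step in isolation; the stand-in random embeddings and the 1.1 cutoff are assumptions for illustration (real embeddings would come from the neural network, e.g., running on a TPU).

```python
# Embedding-distance matching sketch; random vectors stand in for the
# output of a face-embedding network so the decision logic is runnable.
import numpy as np

THRESHOLD = 1.1  # assumed L2 cutoff on unit-normalized embeddings

def l2_normalize(v):
    return v / np.linalg.norm(v)

def identify(probe, gallery, threshold=THRESHOLD):
    """Return the closest enrolled identity, or None if no face is close enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in gallery.items():
        dist = np.linalg.norm(l2_normalize(probe) - l2_normalize(ref))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + rng.normal(scale=0.05, size=128)  # noisy re-capture
print(identify(probe, gallery))                 # -> alice
print(identify(rng.normal(size=128), gallery))  # -> None (unauthorized face)
```

The threshold trades false accepts against false rejects and would be tuned on labeled data in practice.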
4

ENABLING REAL TIME INSTRUMENTATION USING RESERVOIR SAMPLING AND BIN PACKING

Sai Pavan Kumar Meruga (16496823) 30 August 2023 (has links)
Software instrumentation is the process of collecting data during an application's runtime, which will help us debug, detect errors, and optimize the performance of the binary. The recent increase in demand for low-latency and high-throughput systems has introduced new challenges to the process of software instrumentation. Software instrumentation, especially dynamic instrumentation, has a huge impact on system performance in scenarios where there is no early knowledge of the data to be collected. Naive approaches collect too much or too little data, negatively impacting the system's performance.

This thesis investigates the overhead added by reservoir sampling algorithms at different levels of granularity in the real-time instrumentation of distributed software systems. It also describes the implementation of sampling techniques and algorithms to reduce the overhead caused by instrumentation.
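For readers unfamiliar with the technique, below is a minimal sketch of the classic reservoir sampling procedure (Algorithm R): it keeps a fixed-size uniform sample of an unbounded event stream, which is what makes it attractive for bounding instrumentation overhead. The thesis evaluates variants of this idea at several levels of granularity.

```python
import random

def reservoir_sample(stream, k):
    """Uniformly sample k items from an iterable of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = random.randint(0, i)    # inclusive; item survives w.p. k/(i+1)
            if j < k:
                reservoir[j] = item     # evict a random resident
    return reservoir

# e.g., keep 5 trace events out of a million without buffering them all
print(reservoir_sample(range(1_000_000), k=5))
```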
5

Surveillance of Negative Binomial and Bernoulli Processes

Szarka, John Louis III 03 May 2011 (has links)
The evaluation of discrete processes is performed for industrial and healthcare processes. Count data may be used to measure the number of defective items in industrial applications or the incidence of a certain disease at a health facility. Another classification of a discrete random variable is binary data, where information on an item can be classified as conforming or nonconforming in a manufacturing context, or a patient's status of having a disease in health-related applications. The first phase of this research uses discrete count data modeled from the Poisson and negative binomial distributions in a healthcare setting. Syndromic counts are currently monitored by the BioSense program within the Centers for Disease Control and Prevention (CDC) to provide real-time biosurveillance. The Early Aberration Reporting System (EARS) compares recent baseline information with the current day's syndromic count to determine whether outbreaks may be present. An adaptive threshold method is proposed based on fitting baseline data to a parametric distribution and then calculating an upper-tailed p-value. These statistics are then converted to an approximately standard normal random variable. Monitoring is examined for independent and identically distributed data as well as data following several seasonal patterns. An exponentially weighted moving average (EWMA) chart is also used with these methods. The effectiveness of these methods in detecting simulated outbreaks is evaluated in several sensitivity analyses. The second phase of research explored in this dissertation considers information that can be classified as a binary event. In industry, it is desirable for the probability of a nonconforming item, p, to be extremely small. Traditional Shewhart charts, such as the p-chart, are not reliable for monitoring this type of process. A comprehensive literature review of control chart procedures for this type of process is given. The equivalence between two cumulative sum (CUSUM) charts, based on geometric and Bernoulli random variables, is explored. An evaluation of the unit and group-runs (UGR) chart is performed, where it is shown that the in-control behavior of this chart is quite misleading and that the chart should not be recommended to practitioners. / Ph. D.
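A minimal sketch of the adaptive threshold idea is given below, using a Poisson baseline for brevity (the dissertation also fits the negative binomial); the counts are made up for illustration.

```python
import numpy as np
from scipy import stats

def adaptive_z(baseline_counts, todays_count):
    """Upper-tailed p-value under a fitted baseline, mapped to a normal score."""
    lam = np.mean(baseline_counts)                      # fitted Poisson mean
    p_upper = stats.poisson.sf(todays_count - 1, lam)   # P(X >= today's count)
    p_upper = np.clip(p_upper, 1e-12, 1 - 1e-12)        # keep the quantile finite
    return stats.norm.ppf(1.0 - p_upper)                # large count -> large Z

def ewma(zs, lam=0.2):
    """Exponentially weighted moving average of the normal scores."""
    smoothed, s = [], 0.0                               # start at the in-control mean
    for z in zs:
        s = lam * z + (1 - lam) * s
        smoothed.append(s)
    return smoothed

baseline = [12, 9, 15, 11, 10, 13, 8]                   # recent syndromic counts
zs = [adaptive_z(baseline, c) for c in (11, 14, 25)]
print(ewma(zs))  # an outbreak is flagged when the EWMA crosses its control limit
```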
6

AgBufferBuilder for decision support in the collaborative design of variable-width conservation buffers in the Saginaw Bay watershed

Patrick T Oelschlager (16636047) 03 August 2023 (has links)
Field-edge buffers are a promising way to address nonpoint source pollution from agricultural runoff, but concentrated runoff flow often renders standard fixed-width linear buffers ineffective. AgBufferBuilder (ABB) is a tool within ESRI ArcMap Geographic Information Systems software that designs and evaluates targeted, nonlinear buffers based on hydrologic modeling and other field-specific parameters. We tested ABB on n=45 Areas of Interest (AOIs), stratified by estimated sediment loading across three sub-watersheds within Michigan's Saginaw Bay watershed, to evaluate the effectiveness of ABB relative to existing practices across a wide range of landscape conditions. We modeled tractor movement around ABB buffer designs to assess more realistic versions of the likely final designs. ABB regularly failed to deliver the desired 75% sediment capture rate using the default 9 m x 9 m output raster resolution, with Proposed buffers capturing from 0% to 68.49% of sediment within a given AOI (mean = 37.56%). Differences in sediment capture between Proposed and Existing buffers (measured as Proposed minus Existing) ranged from -48% to 66.81% of sediment (mean = 24.70%). Proposed buffers were estimated to capture more sediment than Existing buffers in 37 of 45 AOIs, representing potential for real improvements over Existing buffers across the wider landscape. In 13 of 45 AOIs, ABB buffers modified for tractor movement captured more sediment than Existing buffers while using less total buffer area. We conducted a collaborative design process with three Saginaw Bay watershed farmers to assess their willingness to implement ABB designs. Feedback indicated farmers may prefer in-field erosion control practices like cover cropping and grassed waterways over field-edge ABB designs. More farmer input is needed to better assess farmer perspectives on ABB buffers and to identify preferred data-based design alternatives. Engineered drainage systems with raised ditch berms and upslope catch basins piped underground directly into ditches were encountered several times during site visits. ABB only models surface flow and does not recognize drain output flow entering waterways. Modified ABB functionality that models buffers around drain inlets would greatly improve its usefulness on drained sites. This may be accomplishable through modification of user-entered AOI margins but requires further investigation. Unfortunately, the existing tool is built for outdated software and is not widely accessible to non-expert users. We suggest that an update of this tool with additional functionality and improved user accessibility would be a useful addition to the toolbox of conservation professionals in agricultural landscapes.
7

CyberWater: An open framework for data and model integration

Ranran Chen (18423792) 03 June 2024 (has links)
<p dir="ltr">Workflow management systems (WMSs) are commonly used to organize/automate sequences of tasks as workflows to accelerate scientific discoveries. During complex workflow modeling, a local interactive workflow environment is desirable, as users usually rely on their rich, local environments for fast prototyping and refinements before they consider using more powerful computing resources.</p><p dir="ltr">This dissertation delves into the innovative development of the CyberWater framework based on Workflow Management Systems (WMSs). Against the backdrop of data-intensive and complex models, CyberWater exemplifies the transition of intricate data into insightful and actionable knowledge and introduces the nuanced architecture of CyberWater, particularly focusing on its adaptation and enhancement from the VisTrails system. It highlights the significance of control and data flow mechanisms and the introduction of new data formats for effective data processing within the CyberWater framework.</p><p dir="ltr">This study presents an in-depth analysis of the design and implementation of Generic Model Agent Toolkits. The discussion centers on template-based component mechanisms and the integration with popular platforms, while emphasizing the toolkit’s ability to facilitate on-demand access to High-Performance Computing resources for large-scale data handling. Besides, the development of an asynchronously controlled workflow within CyberWater is also explored. This innovative approach enhances computational performance by optimizing pipeline-level parallelism and allows for on-demand submissions of HPC jobs, significantly improving the efficiency of data processing.</p><p dir="ltr">A comprehensive methodology for model-driven development and Python code integration within the CyberWater framework and innovative applications of GPT models for automated data retrieval are introduced in this research as well. It examines the implementation of Git Actions for system automation in data retrieval processes and discusses the transformation of raw data into a compatible format, enhancing the adaptability and reliability of the data retrieval component in the adaptive generic model agent toolkit component.</p><p dir="ltr">For the development and maintenance of software within the CyberWater framework, the use of tools like GitHub for version control and outlining automated processes has been applied for software updates and error reporting. Except that, the user data collection also emphasizes the role of the CyberWater Server in these processes.</p><p dir="ltr">In conclusion, this dissertation presents our comprehensive work on the CyberWater framework's advancements, setting new standards in scientific workflow management and demonstrating how technological innovation can significantly elevate the process of scientific discovery.</p>
8

Nonpoint Source Pollutant Modeling in Small Agricultural Watersheds with the Water Erosion Prediction Project

Ryan McGehee (14054223) 04 November 2022 (has links)
Current watershed-scale, nonpoint source (NPS) pollution models do not represent the processes and impacts of agricultural best management practices (BMPs) on water quality in sufficient detail. To begin addressing this gap, a novel process-based, watershed-scale water quality model (WEPP-WQ) was developed based on the Water Erosion Prediction Project (WEPP) and the Soil and Water Assessment Tool (SWAT) models. The proposed model was validated at both hillslope and watershed scales for runoff, sediment, and both soluble and particulate forms of nitrogen and phosphorus. WEPP-WQ is now one of only two models that simulate BMP impacts on water quality in 'high' detail, and it is the only one not based on USLE sediment predictions. Model validations indicated that particulate nutrient predictions were better than soluble nutrient predictions for both nitrogen and phosphorus. Predictions for uniform conditions outperformed those for nonuniform conditions, and calibrated model simulations performed better than uncalibrated ones. Applications of these kinds of models in real-world, historical simulations are often limited by a lack of field-scale agricultural management inputs. Therefore, a prototype tool was developed to derive management inputs for hydrologic models from remotely sensed imagery at field-scale resolution. At present, only inference of crop, cover crop, and tillage practice is supported; these were validated at annual and average annual time intervals based on data availability for the various management endpoints. Extraction model training and validation were substantially limited by the relatively small field areas in the observed management dataset. Both of these efforts contribute to computational modeling research and applications pertaining to agricultural systems and their impacts on the environment.
