81

Avaliação de atributos de testabilidade para sistemas de suporte à decisão / Testability attributes assessment for decision support systems

Marcos Fernando Geromini 11 March 2016
As organizações públicas e privadas são constantemente expostas a fatores internos e externos, que podem comprometer sua estabilidade diante das oscilações da economia e dos concorrentes. Nestas empresas, os tomadores de decisão são essenciais para analisar e avaliar todas as variáveis que envolvem estes fatores, com o objetivo de identificar o melhor caminho para os negócios. Entretanto, conseguir gerenciar os dados internos e externos à organização não é uma atividade simples. Neste contexto, os Sistemas de Suporte à Decisão (SSD) tornaram-se fundamentais para auxiliar os tomadores de decisão na solução de problemas mal estruturados ou sem nenhuma estruturação. Porém, a complexidade que envolve os projetos de implantação ou desenvolvimento de um SSD, geralmente compromete a efetividade dos testes que garantem a conformidade do sistema em relação às especificações previamente definidas. Uma solução para esse problema é considerar os atributos ou fatores de testabilidade nestes projetos, pois podem elevar o grau de eficácia e eficiência da atividade de teste e consequentemente contribuírem para redução do tempo e custos do projeto. Portanto, conseguir identificar esses atributos ou fatores que tenham influência na testabilidade dos SSD e algum método que permita analisar e avaliar o quanto estão presentes neste sistema, é essencial para aumentar a qualidade do sistema. Diante desta necessidade, este trabalho investigou e selecionou os principais fatores que podem influenciar no grau de testabilidade de um software e propôs um método para analisar e avaliar o quanto o SSD está considerando esses fatores em sua arquitetura. Com o objetivo de avaliar e validar o método de análise e avaliação, foram realizados testes de aplicabilidade em empresas de pequeno, médio e grande porte, bem como no meio acadêmico. Com os resultados obtidos nos testes, foi possível concluir que o método é específico para SSD, que pode ser usado como um guia durante o processo de desenvolvimento e auxiliar na classificação de SSD quanto a sua testabilidade. / Public and private organizations are constantly exposed to internal and external factors that can compromise their stability in the face of fluctuations in the economy and competition. In these companies, decision makers are essential to analyze and evaluate all the variables surrounding these factors in order to identify the best path for the business. However, managing data internal and external to the organization is not a simple activity. In this context, Decision Support Systems (DSS) have become essential to assist decision makers in solving problems that are poorly structured or lack structure entirely. However, the complexity involved in implementing or developing a DSS usually compromises the effectiveness of the tests that ensure the system's compliance with previously defined specifications. One solution to this problem is to consider testability attributes or factors in these projects, since they can raise the effectiveness and efficiency of the testing activity and thus help reduce project time and costs. Therefore, identifying the attributes or factors that influence the testability of a DSS, together with a method for analyzing and evaluating the extent to which they are present in the system, is essential to increase system quality.
Given this need, this work investigated and selected the main factors that can influence the degree of testability of software and proposed a method to analyze and assess the extent to which a DSS considers these factors in its architecture. In order to evaluate and validate the analysis and evaluation method, applicability tests were performed in small, medium, and large companies, as well as in academia. From the results obtained in the tests, it was concluded that the method is specific to DSS, can be used as a guide during the development process, and can assist in classifying DSS with respect to their testability.
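To illustrate the kind of attribute-based assessment the abstract describes, the sketch below scores a system against a weighted checklist of testability attributes. The attribute names, weights, and scores are hypothetical examples only; they are not the factors or the scoring scheme proposed in the thesis.

```python
# Illustrative only: a generic weighted checklist for scoring how strongly a
# system exhibits a set of testability attributes. Attribute names, weights,
# and scores are hypothetical stand-ins, not the thesis's selected factors.

ATTRIBUTES = {
    # attribute: (weight, score 0-5 assigned by the assessor)
    "observability":   (0.25, 4),
    "controllability": (0.25, 3),
    "decomposability": (0.20, 2),
    "documentation":   (0.15, 5),
    "automation":      (0.15, 1),
}

def testability_score(attributes: dict) -> float:
    """Weighted average of attribute scores, normalized to the 0-1 range."""
    total_weight = sum(w for w, _ in attributes.values())
    weighted = sum(w * s for w, s in attributes.values())
    return weighted / (5 * total_weight)   # 5 is the maximum per-attribute score

if __name__ == "__main__":
    print(f"Overall testability: {testability_score(ATTRIBUTES):.2f}")
```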
82

Social Responsibility Guidelines & Sustainable Development : Integrating a Common Goal of a Sustainable Society

Dewangga, Anastasia, Goldsmith, Simon, Pegram, Neil January 2008
Abstract: Given the global sustainability challenge, effective organizational Social Responsibility (SR) guidelines must set best practices that acknowledge environmental constraints and strive for a sustainable society. SR has historically underrepresented environmental issues and needs to shift from a reactive focus on societal stakeholder demands to a proactive whole-systems planning framework. There is a risk that unless SR guidelines consider both social and environmental issues together, they may generate negative outcomes for organizational viability. This research finds key Sustainable Development concepts that should be integrated within SR guidelines and identifies an overall goal of SR as assisting organizations in moving towards a sustainable society. A Sustainable Society is defined in the research according to a set of scientific principles, based on environmental constraints and fundamental social needs. This clear goal enables the organization to ‘backcast’ from this success point in order to take effective strategic steps. The authors recommend the incorporation of critical concepts from Strategic Sustainable Development, a proven organizational sustainability planning framework, into SR guidelines to increase their effectiveness in strategic SR decision-making. The ISO 26000 SR Guideline is used as a case study.
83

Preserving Intangible Cultural Heritage to Facilitate a Transition towards Sustainability : A Case Study of Tibet's Tourism Industry

Pan, Bingbing, Shizhou, Yanni, Crone, Carl January 2007
The purpose of this paper is to give suggestions for how to preserve intangible cultural heritage (ICH) in a way that supports sustainability, using Tibet as a case study. Understanding the importance of ICH for tourism, we scrutinize ICH through the lens of strategic sustainable development (SSD) and use tourism as a leverage point to enter into a real-life situation. ICH is the root of all cultural expression. Without guarding ICH there is little meaning to the physical culture that remains and, ultimately, tourism declines. ICH is a new topic, and there is little research and few ideas as to how to guide its preservation. We offer recommendations which include identifying and educating the stakeholders, conducting adequate market research (especially in tourism), investing in dematerialization technology, and seeking substitutes under the guidance of the Golden Rule within the social sustainability context. Our contribution is to build a vision of success for preserving Tibetan ICH via tourism within the constraints of the four sustainability principles, and then to demonstrate some prioritized actions in order to develop towards sustainability.
84

Workload Driven Designs for Cost-Effective Non-Volatile Memory Hierarchies

Timothy A Pritchett (9179468) 28 July 2020
Compared to traditional hard-disk drives (HDDs), non-volatile (NV) memory technologies offer significant performance advantages on one hand, but incur significant cost and asymmetric write-performance on the other. A common strategy to manage such cost and performance differentials is to use hierarchies such that a small, but intensely accessed, working set is staged in the NV storage (selective caching). However, when this working set includes write-heavy data, the low write-lifetime of NV storage necessitates significant over-provisioning to maintain required lifespans (e.g., storage lifespan must match or exceed the 3-year server lifespan). One may expect that DRAM-based write-buffers can filter the writes that would otherwise trickle through to the NV storage and thus alleviate the write-pressure felt at the NV storage. Unfortunately, selective caches, when used with common recency-based or frequency-based replacement, have access patterns that require large write buffers (e.g., 100 MB+ relative to a 12 GB cache) to filter writes adequately. Further, these large DRAM write-buffers also require backup power to ensure the durability of disk writes. More sophisticated replacement policies that combine recency and frequency can reduce the size of the DRAM buffer (while preserving write-filtering), but are so computationally expensive that they can limit the I/O rate, especially for simple controllers (e.g., RAID controllers).

My first contribution is the design and implementation of WriteGuard, a self-tuning sieving write-buffer algorithm that filters writes as well as the highly effective (but computationally expensive) algorithms, while requiring lightweight computation comparable to a simple LRU-based write-buffer. While WriteGuard reduces the capacity needed for DRAM buffering (to approx. 64 MB), it does not eliminate the need for DRAM buffers (and the corresponding power backup).

For my second thrust, I identify two specific application characteristics: (1) the vast majority of the write-buffer's contents is composed of write-dominant blocks, and (2) the vast majority of blocks in the write-buffer are overwritten within a period of 28 hours. I show that these characteristics enable a high-density, optimized STT-MRAM as a replacement for DRAM, which makes the write-buffer durable (thus eliminating the cost of power backup for the write-buffer). My optimized STT-MRAM-based write buffer achieves higher density by (a) trading off superfluous durability by exploiting characteristic (2), and (b) deoptimizing the read-performance of STT-MRAM by leveraging characteristic (1). Together, the techniques increase the density of STT-MRAM by 20% with low or no impact on write-buffer performance.
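As a rough illustration of the write-filtering idea described above (not WriteGuard itself), the sketch below shows a DRAM buffer that absorbs overwrites to hot blocks so that only evicted blocks reach the NV device; plain LRU eviction stands in for the self-tuning sieving policy, and the capacity and callback are placeholders.

```python
from collections import OrderedDict

class WriteAbsorbingBuffer:
    """Toy DRAM write buffer: absorbs overwrites to hot blocks so that only
    evicted blocks reach the NV device. LRU eviction stands in for the
    self-tuning sieving policy described in the thesis (not reproduced here)."""

    def __init__(self, capacity_blocks, nv_write):
        self.capacity = capacity_blocks
        self.nv_write = nv_write           # callback that performs the NV write
        self.buffer = OrderedDict()        # block_id -> data, in LRU order

    def write(self, block_id, data):
        if block_id in self.buffer:
            self.buffer.move_to_end(block_id)   # overwrite absorbed in DRAM
        self.buffer[block_id] = data
        if len(self.buffer) > self.capacity:
            victim, victim_data = self.buffer.popitem(last=False)
            self.nv_write(victim, victim_data)  # only evictions hit NV storage

nv_writes = []
buf = WriteAbsorbingBuffer(capacity_blocks=2, nv_write=lambda b, d: nv_writes.append(b))
for blk in [1, 2, 1, 1, 3]:      # repeated writes to block 1 are absorbed in DRAM
    buf.write(blk, b"data")
print(nv_writes)                  # only block 2 has been evicted so far -> [2]
```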
85

Applying a Strategic Sustainable Development Lens to Supplier Network Collaboration

Gren, Kristina, Lotfalian, Ashkan, Ahmadi, Hassibullah January 2020
A company cannot be more sustainable than its supply chain. Given the complexity of supplier networks and the need for collaborative, strategic action for sustainability across them, this research takes a systems perspective to answer: “How can a Strategic Sustainable Development (SSD) lens support supplier network collaboration towards sustainability?” The application of the SSD lens includes mapping barriers and enablers to collaboration for sustainability found in the literature and at a case company, along with the Five-Level Model (5LM), to which we add complex adaptive system elements. Based on this, a thematic analysis of the barriers and enablers is performed. The paper presents the results of the 5LM and thematic analyses, finding that an SSD perspective reveals interconnections across the multiple enablers and barriers to collaboration. The challenges encountered during the 5LM analysis, along with the implications of the results for the Sustainable Supply Chain Management (SSCM) academic field and for practitioners, are discussed. We conclude that the variety and complexity of barriers and enablers for collaboration make it important to approach sustainability strategically across the supplier network. The SSD perspective supports collaboration for sustainability by providing an opportunity to examine it from a systems perspective and to formulate prescriptive considerations for the case company and guiding questions for SSCM practitioners.
86

Comparing CNN methods for detection and tracking of ships in satellite images / Jämförelse av CNN-baserad machine learning för detektion och spårning av fartyg i satellitbilder

Torén, Rickard January 2020
Knowing where ships are located is a key factor in supporting safe maritime transport and harbor management, as well as in preventing accidents and illegal activities at sea. International solutions for geopositioning in the maritime domain already exist, such as the Automatic Identification System (AIS). However, AIS requires the ships to constantly transmit their location. Real-time imagery from geostationary satellites has recently been proposed to complement the existing AIS system, making locating and tracking more robust. This thesis investigated and compared two machine learning image analysis approaches – Faster R-CNN and SSD with FPN – for detection and tracking of ships in satellite images. Faster R-CNN is a two-stage model which first proposes regions of interest and then performs detection based on these proposals. SSD is a one-stage model which detects objects directly, with the additional FPN improving detection of objects covering few pixels. The MAritime SATellite Imagery dataset (MASATI), with 5600 images taken from a wide variety of locations, was used for training and evaluation of the candidate models. The TensorFlow Object Detection API was used for the implementation of the two models. The detection results show that Faster R-CNN achieved 30.3% mean Average Precision (mAP), while SSD with FPN achieved only 0.0005% mAP on the unseen test part of the dataset. This study concluded that Faster R-CNN is a candidate for identifying and tracking ships in satellite images, whereas SSD with FPN seems less suitable for this task. It is also concluded that the amount of training and the choice of hyper-parameters impacted the results.
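For readers unfamiliar with the TensorFlow Object Detection API mentioned above, the sketch below shows the usual inference pattern for an exported detection model. The model directory and score threshold are placeholders; the thesis's trained Faster R-CNN and SSD-FPN checkpoints are not reproduced here.

```python
# Minimal inference sketch with a TensorFlow Object Detection API exported model.
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # hypothetical path

def detect_ships(image: np.ndarray, score_threshold: float = 0.5):
    """Run one image (H, W, 3, uint8) through the detector and return kept boxes."""
    input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]   # add batch dimension
    detections = detect_fn(input_tensor)
    scores = detections["detection_scores"][0].numpy()
    boxes = detections["detection_boxes"][0].numpy()              # normalized [ymin, xmin, ymax, xmax]
    keep = scores >= score_threshold
    return boxes[keep], scores[keep]
```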
87

Dataset Evaluation Method for Vehicle Detection Using TensorFlow Object Detection API / Utvärderingsmetod för dataset inom fordonsigenkänning med användning av TensorFlow Object Detection API

Furundzic, Bojan, Mathisson, Fabian January 2021
Recent developments in the field of object detection have highlighted significant variation in quality between visual datasets. As a result, there is a need for a standardized approach to validating visual dataset features and their contribution to performance. With a focus on vehicle detection, this thesis aims to develop an evaluation method for comparing visual datasets. This method was used to determine which dataset produced the detection model with the greatest ability to detect vehicles. The visual datasets compared in this research were BDD100K, KITTI and Udacity, each used to train a separate model. Applying the developed evaluation method, a strong indication of BDD100K's performance superiority was found. Further analysis and feature extraction of dataset size, label distribution and average labels per image were conducted. In addition, real-world experiments were performed to validate the developed evaluation method. All features and experimental results pointed to BDD100K's superiority over the other datasets, validating the developed evaluation method. Furthermore, the TensorFlow Object Detection API's ability to improve the performance gained from a visual dataset was studied. Through the use of augmentations, it was concluded that the TensorFlow Object Detection API serves as a useful tool to increase the performance gain from visual datasets. / Inom fältet av objektdetektering har ny utveckling demonstrerat stor kvalitetsvariation mellan visuella dataset. Till följd av detta finns det ett behov av standardiserade valideringsmetoder för att jämföra visuella dataset och deras prestationsförmåga. Detta examensarbete har, med ett fokus på fordonsigenkänning, som syfte att utveckla en pålitlig valideringsmetod som kan användas för att jämföra visuella dataset. Denna valideringsmetod användes därefter för att fastställa det dataset som bidrog till systemet med bäst förmåga att detektera fordon. De dataset som användes i denna studien var BDD100K, KITTI och Udacity, som tränades på individuella igenkänningsmodeller. Genom att applicera denna valideringsmetod, fastställdes det att BDD100K var det dataset som bidrog till systemet med bäst presterande igenkänningsförmåga. En analys av dataset storlek, etikettdistribution och genomsnittliga antalet etiketter per bild var även genomförd. Tillsammans med ett experiment som genomfördes för att testa modellerna i verkliga sammanhang, kunde det avgöras att valideringsmetoden stämde överens med de fastställda resultaten. Slutligen studerades TensorFlow Object Detection APIs förmåga att förbättra prestandan som erhålls av ett visuellt dataset. Genom användning av ett modifierat dataset, kunde det fastställas att TensorFlow Object Detection API är ett lämpligt modifieringsverktyg som kan användas för att öka prestandan av ett visuellt dataset.
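One hedged way to realize this kind of dataset comparison is to train one detector per dataset and evaluate all of them on a common, held-out vehicle test set with the COCO mAP metric. The file names below are placeholders, and this is not necessarily the exact evaluation pipeline used in the thesis.

```python
# Sketch: comparing detectors trained on different datasets by evaluating each on
# one shared test set (COCO annotation format) with the pycocotools mAP metric.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

def coco_map(gt_annotations: str, detections_json: str) -> float:
    """Return COCO mAP@[.5:.95] for one model's detections on the common test set."""
    coco_gt = COCO(gt_annotations)                     # ground-truth annotations
    coco_dt = coco_gt.loadRes(detections_json)         # model detections (COCO results format)
    evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
    evaluator.evaluate()
    evaluator.accumulate()
    evaluator.summarize()
    return evaluator.stats[0]                          # stats[0] is mAP@[.5:.95]

for name in ["bdd100k", "kitti", "udacity"]:           # one detector per training dataset
    print(name, coco_map("common_test.json", f"detections_{name}.json"))
```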
88

Pruning a Single-Shot Detector for Faster Inference : A Comparison of Two Pruning Approaches / Beskärning av en enstegsdetektor för snabbare prediktering : En jämförelse av två beskärningsmetoder för djupa neuronnät

Beckman, Karl January 2022
Modern state-of-the-art object detection models are based on convolutional neural networks and can be divided into single-shot detectors and two-stage detectors. Two-stage detectors exhibit impressive detection performance, but their complex pipelines make them slow. Single-shot detectors are not as accurate as two-stage detectors, but they are faster and can be used for real-time object detection. Even so, a large number of calculations is still required to produce a prediction, which not many embedded devices can do in reasonable time. It is therefore natural to ask whether single-shot detectors could become even faster. Pruning is a technique for reducing the size of neural networks. The main idea behind network pruning is that some model parameters are redundant and do not contribute to the final output. By removing those redundant parameters, fewer computations are needed to produce predictions, which may lead to faster inference; and since the parameters are redundant, the model accuracy should not be affected. This thesis investigates two approaches for pruning the SSD-MobileNet-V2 single-shot detector. The first approach prunes the single-shot detector by a large portion and retrains the remaining parameters only once. In the other approach, a smaller portion is pruned, but pruning and retraining are done in an iterative fashion, where one round of pruning and retraining constitutes one iteration. Beyond comparing the two pruning approaches, the thesis also studies the trade-off between model accuracy and inference speed that pruning induces. The results from the experiments suggest that the iterative pruning approach preserves the accuracy of the original model better than the approach where pruning and fine-tuning are performed once. For all four pruning levels at which the two approaches are compared, iterative pruning yields more accurate results. In addition, an inference evaluation indicates that iterative pruning is a good compression method for SSD-MobileNet-V2, finding models that are both faster and more accurate than the original model. The thesis findings could be used to guide future pruning research on SSD-MobileNet-V2, but also on other single-shot detectors such as RetinaNet and the YOLO models. / Moderna modeller för objektsdetektering bygger på konvolutionella neurala nätverk och kan delas in i ensteg- och tvåstegsdetektorer. Tvåstegsdetektorer uppvisar imponerande detektionsprestanda, men deras komplexa pipelines gör dem långsamma. Enstegsdetektorer uppvisar oftast inte lika bra detektionsprestanda som tvåstegsdetektorer, men de är snabbare och kan användas för objektdetektering i realtid. Trots att enstegsdetektorer är snabbare krävs det fortfarande ett stort antal beräkningar för att få fram en prediktering, vilket inte många inbyggda enheter kan göra på rimlig tid. Därför är det naturligt att fråga sig om enstegsdetektorer kan bli ännu snabbare. Nätverksbeskärning är en teknik för att minska storleken på neurala nätverk. Huvudtanken bakom nätverksbeskärning är att vissa modellparametrar är överflödiga och inte bidrar till det slutliga resultatet. Genom att ta bort dessa överflödiga parametrar krävs färre beräkningar för att producera en prediktering, vilket kan leda till att nätverket blir snabbare och eftersom parametrarna är överflödiga bör modellens detektionsprestanda inte påverkas. I den här masteruppsatsen undersöks två metoder för att beskära enstegsdetektorn SSD-MobileNet-V2.
Det första tillvägagångssättet går ut på att en stor del av detektorns vikter beskärs och att de återstående parametrarna endast finjusteras en gång. I det andra tillvägagångssättet beskärs en mindre del, men beskärning och finjustering sker på ett iterativt sätt, där beskärning och finjustering utgör en iteration. Förutom att jämföra två metoder för beskärning studeras i masteruppsatsen också den kompromiss mellan modellens detektionsprestanda och inferenshastighet som beskärningen medför. Resultaten från experimenten tyder på att den iterativa beskärningsmetoden bevarar den ursprungliga modellens detektionsprestanda bättre än den andra metoden där beskärning och finjustering utförs en gång. För alla fyra beskärningsnivåer som de två metoderna jämförs ger iterativ beskärning mer exakta resultat. Dessutom visar en hastighetsutvärdering att iterativ beskärning är en bra komprimeringsmetod för SSD-MobileNet-V2, eftersom modeller som både är snabbare och mer exakta än den ursprungliga modellen går att hitta. Masteruppsatsens resultat kan användas för att vägleda framtida forskning om beskärning av SSD-MobileNet-V2, men även av andra enstegsdetektorer, t.ex. RetinaNet och YOLO-modellerna.
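A minimal sketch of the iterative prune-and-fine-tune loop, using the TensorFlow Model Optimization toolkit on a generic Keras model. The sparsity levels, epoch counts, and classification loss are illustrative stand-ins rather than the thesis's SSD-MobileNet-V2 setup, which requires the Object Detection API training pipeline.

```python
import tensorflow_model_optimization as tfmot

def iterative_prune(model, train_ds, val_ds, sparsities=(0.25, 0.5, 0.75), epochs_per_step=2):
    """Prune in several small steps, fine-tuning the remaining weights after each one."""
    for sparsity in sparsities:
        pruned = tfmot.sparsity.keras.prune_low_magnitude(
            model,
            pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(sparsity, begin_step=0),
        )
        pruned.compile(optimizer="adam",
                       loss="sparse_categorical_crossentropy",   # placeholder loss, not a detection loss
                       metrics=["accuracy"])
        pruned.fit(train_ds, validation_data=val_ds, epochs=epochs_per_step,
                   callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
        model = tfmot.sparsity.keras.strip_pruning(pruned)        # keep weights, drop pruning wrappers
    return model
```

A one-shot variant would simply call `prune_low_magnitude` once with the final target sparsity and fine-tune a single time, which is the other approach compared in the thesis.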
89

Ecological risk assessment of pesticide use in rice farming in the Mekong Delta, Vietnam

Dirikumo, Bubaraye Ohiosimuan January 2023
Pesticide use in rice farming is common practice in the Mekong Delta and poses ecological risks to aquatic organisms, the environment, and human health. This study focused on the ecological risk assessment of pesticide use in rice farming, using the PRIMET model as a decision support tool to evaluate pesticide exposure, ecotoxicity, and risk characterization, and employing the species sensitivity distribution (SSD) assessment model to calculate the potentially affected fraction (PAF) of species from the predicted environmental concentrations (PECs) computed by PRIMET. The study involved collating and analyzing pesticide inventories and application data from 138 farmers, which formed the basis for characterizing pesticide use, farming practices, environmental variables, and ecological indicators in two provinces of the Mekong Delta: Dong Thap and Hau Giang. The study showed that pesticide use was high, with a wide range of pesticide types. The ecotoxicity assessment indicated that some pesticides pose a potential acute and chronic risk to non-target organisms. The active ingredient identified as posing an acute toxicity risk, with an exposure-toxicity ratio (ETR) > 100, is the insecticide indoxacarb, which belongs to the oxadiazine chemical class; arthropods were shown to be highly sensitive to it, putting them at risk even at very low concentrations. In contrast, fish generally exhibit moderate tolerance while remaining sensitive to certain chemicals. The risk characterization revealed that the ecological risks of pesticide use were higher in Dong Thap than in Hau Giang due to differences in ecological conditions, pesticide practices, and farming systems. Overall, this study highlights the need for improved pesticide management practices in rice farming in the Mekong Delta region to reduce ecological risks and protect the environment and human health. The practical and theoretical implications of this study are discussed.
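A small worked sketch of the SSD step: fitting a log-normal species sensitivity distribution to toxicity endpoints and reading off the PAF at a given PEC. The endpoint values are invented for illustration and are not data from the thesis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-species toxicity endpoints (e.g. EC50s) in micrograms per litre.
toxicity_endpoints_ug_l = np.array([0.8, 1.5, 3.2, 6.0, 12.0, 25.0, 60.0])

log_endpoints = np.log10(toxicity_endpoints_ug_l)
mu, sigma = log_endpoints.mean(), log_endpoints.std(ddof=1)   # log-normal SSD parameters

def paf(pec_ug_l: float) -> float:
    """Fraction of species whose endpoint lies below the PEC, under the log-normal SSD."""
    return stats.norm.cdf((np.log10(pec_ug_l) - mu) / sigma)

print(f"PAF at PEC = 5 ug/L: {paf(5.0):.2f}")
```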
90

Towards Manifesting Reliability Issues In Modern Computer Systems

Zheng, Mai 02 September 2015 (has links)
No description available.
