301

Best Practices, Benefits and Obstacles When Conducting Continuous Delivery in Software-Intensive Projects

Hansson, Björn January 2017 (has links)
The goals of continuous delivery are to reduce the risk, cost, and time of releasing software to stakeholders and users. It is a process that can result in more reliable releases and fewer errors in the software. Furthermore, there are some best practices to follow when conducting the continuous delivery process, involving version control and build tools. There are, however, some obstacles and challenges for organizations moving to continuous delivery, for example complex environments, organizational problems, and a lack of automated test cases. This master thesis investigates continuous delivery through a literature review, a multiple-case study, and fieldwork. The results can be used by software engineers and organizations who are new to the continuous delivery concept, or by more experienced software engineers to gain more knowledge about existing obstacles and for further research.
302

A Comparison of CI/CD Tools on Kubernetes

Johansson, William January 2022 (has links)
Kubernetes is a fast-emerging technological platform for developing and operating modern IT applications. The capacity to deploy new apps and change old ones at a faster rate with less chance of error is one of the key value propositions of the Kubernetes platform. A continuous integration and continuous deployment (CI/CD) pipeline is a crucial component of the technology. Such pipelines compile all updated code, run specific tests, and may then automatically deploy the produced code artifacts to a running system. There is a thriving ecosystem of CI/CD tools. Tools can be divided into two types: integrated and standalone. Integrated tools cover both pipeline phases, CI and CD, while standalone tools handle just one of the phases, which requires two independent programs to build up the pipeline. Some tools predate Kubernetes and may be adapted to operate on Kubernetes, while others are new and designed specifically for use with Kubernetes clusters. CD systems are classified as push-style (artifacts from outside the cluster are pushed into the cluster) or pull-style (a CD tool running inside the cluster pulls built artifacts into the cluster). Pull- and push-style pipelines affect how cluster credentials are managed and whether they ever need to leave the cluster. This thesis investigates the deployment time, fault tolerance, and access security of pipelines. Using a simple microservices application, a testing setup is created to measure the metrics of the pipelines. Drone, Argo Workflows, ArgoCD, and GoCD are the tools compared in this study. These tools are coupled to form various pipelines. The pipeline using the Kubernetes-specific tools, Argo Workflows and ArgoCD, is the fastest; the pipeline with GoCD is somewhat slower; and the Drone pipeline is the slowest. The pipeline that used Argo Workflows and ArgoCD could also withstand failures, whereas the other pipelines that used Drone and GoCD were unable to recover and timed out. Pull pipelines handle Kubernetes access differently from push pipelines: the Kubernetes cluster credentials do not have to leave the cluster, whereas push pipelines need the cluster credentials in the external environment where the CD tool is running.
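To make the push/pull distinction concrete, here is a minimal Python sketch of the pull-style idea described above: an agent running inside the cluster periodically compares the desired state (for example, manifests rendered from a Git repository) with the live state and applies any difference, so cluster credentials never have to leave the cluster. This is a conceptual illustration only, not code from Drone, Argo Workflows, ArgoCD, or GoCD; all function names and data are hypothetical.

    # Pull-style continuous deployment, reduced to its core loop (illustrative only).
    def desired_manifests() -> dict[str, str]:
        # Stand-in for rendering manifests from a Git repository (the source of truth).
        return {"deployment/web": "image: web:v2"}

    def live_manifests() -> dict[str, str]:
        # Stand-in for querying the Kubernetes API with in-cluster credentials.
        return {"deployment/web": "image: web:v1"}

    def apply_changes(diff: dict[str, str]) -> None:
        for name, spec in diff.items():
            print(f"applying {name}: {spec}")

    def reconcile_once() -> None:
        desired, live = desired_manifests(), live_manifests()
        diff = {k: v for k, v in desired.items() if live.get(k) != v}
        if diff:
            apply_changes(diff)

    if __name__ == "__main__":
        reconcile_once()  # a real in-cluster agent would repeat this on an interval

A push-style pipeline runs the equivalent apply step outside the cluster, which is why its CI/CD environment must hold the cluster credentials.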
303

Network Update and Service Chain Management in Software Defined Networks

Chen, Yang, 0000-0003-0578-2016 January 2020 (has links)
Software Defined Networking (SDN) emerged in recent years to fundamentally change how we design, build, and manage networks. To maximize network utilization, its control plane needs to frequently update the data plane via flow migration as network conditions change dynamically, which is known as network update. Network Function Virtualization (NFV) addresses the problems of traditional expensive hardware appliances by leveraging virtualization technology to implement network functions in software modules (middleboxes). These software modules, also called Virtual Network Functions (VNFs), are now commonly provisioned in modern networks, demonstrating their increasing importance. The combination of SDN and NFV enables network service providers to pick service locations from multiple available servers and maneuver traffic through appropriate VNFs, which is known as VNF deployment. A service chain consists of multiple VNFs chained in some order. VNFs are executed on virtualization platforms, which makes them more prone to error than dedicated hardware. As a result, one important issue for service chains is reliability, meaning that each type of VNF in a service chain performs its function properly, which is known as service chain resilience. This dissertation presents our research on the three topics mentioned above in order to improve network performance. Details are as follows:
1. Network Update: SDNs always need to migrate flows to update the network configuration for better system performance. However, the existing literature does not take flow path overlapping information into consideration when flows' routes are re-allocated. Consequently, congestion happens, resulting in deadlocks among flows and link resources, which block the update process and cause severe packet loss. We propose multiple solutions that exploit various kinds of idle resources in the network.
2. VNF Deployment: We focus on the VNF deployment problem under different settings and constraints, including (1) network topology; (2) vertex capacity constraints; (3) the traffic-changing effect; (4) heterogeneous or homogeneous models for one VNF kind; and (5) dependency relations between VNFs. We efficiently deploy VNF instances while making sure that the processing requirements of all flows are satisfied.
3. Resilient Service Chain Management: One effective way of ensuring VNF robustness is to provision redundancy in the form of backup instances deployed alongside active ones. In order to guarantee service chain reliability, we consider both server resource allocation and VNF backup assignment. We aim at minimizing the total cost in terms of transmission delay and rule changes. / Computer and Information Science
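As a rough illustration of the VNF deployment setting in item 2, the toy Python sketch below greedily assigns VNF instances with given processing demands to servers with limited (vertex) capacity. It is not one of the dissertation's algorithms; the greedy rule, demands, and capacities are hypothetical and only convey the flavor of the capacity constraint.

    def place_vnfs(demands: dict[str, int], capacity: dict[str, int]) -> dict[str, str]:
        """Assign each VNF (with a processing demand) to the server with the most
        remaining capacity; fail if some demand cannot be satisfied."""
        remaining = dict(capacity)
        placement: dict[str, str] = {}
        # Place the largest demands first, a common heuristic for bin-packing-like problems.
        for vnf, need in sorted(demands.items(), key=lambda kv: -kv[1]):
            server = max(remaining, key=remaining.get)
            if remaining[server] < need:
                raise RuntimeError(f"no server can host {vnf} (needs {need})")
            placement[vnf] = server
            remaining[server] -= need
        return placement

    print(place_vnfs({"firewall": 4, "nat": 2, "ids": 3}, {"s1": 6, "s2": 5}))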
304

Development of a Pressure Sensing System Coupled with Deployable Machine Learning Models for Assessing Residual Limb Fit in Lower Limb Prosthetics

Lewter, Maxwell D 01 December 2024 (has links) (PDF)
Lower limb amputations pose significant challenges for patients, with over 150,000 cases annually in the U.S., leading to a high demand for effective prosthetics. However, only 43% of lower limb prosthetic users report satisfaction, primarily due to issues with socket fit, which is critical for comfort, stability, and preventing injury. This study presents a deployable sensing system for potentially real-time monitoring of prosthetic socket fit, using pressure sensors and convolutional neural networks (CNNs) to analyze the pressure distribution within the socket. A novel CNN architecture, utilizing both dilated and strided convolutions, is proposed to effectively capture spatial-temporal patterns in multivariate time-series data, which is processed as an image. The system was designed for edge deployment on the Sony Spresense microcontroller, maintaining a small model size while achieving high accuracy. Results show that the CNN models, particularly those optimized with stochastic gradient descent (SGD), demonstrated robustness and high transferability. This system provides a cost-effective, portable solution to improve prosthetic fit, enhancing patient care and preventing gait-related injuries.
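As a sketch of the kind of architecture described above, and not the thesis's actual model, the PyTorch snippet below combines a dilated convolution (widening the temporal receptive field) with a strided convolution (downsampling in place of pooling) over a multivariate pressure time series treated as a one-channel image. The sensor count, window length, layer sizes, and class count are illustrative assumptions, and training (the abstract reports SGD-optimized variants performing best) is omitted.

    import torch
    import torch.nn as nn

    class TinyPressureCNN(nn.Module):
        def __init__(self, n_classes: int = 3):
            super().__init__()
            self.features = nn.Sequential(
                # Dilated convolution: larger temporal receptive field, no extra parameters.
                nn.Conv2d(1, 8, kernel_size=3, padding=2, dilation=2),
                nn.ReLU(),
                # Strided convolution: downsampling without a separate pooling layer.
                nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(16, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(1))

    # A batch of 4 "images": 16 pressure sensors x 64 time steps, one channel.
    logits = TinyPressureCNN()(torch.randn(4, 1, 16, 64))
    print(logits.shape)  # torch.Size([4, 3])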
305

Enhancing CryptoGuard's Deployability for Continuous Software Security Scanning

Frantz, Miles Eugene 21 May 2020 (has links)
The increasing development speed enabled by Agile may lead to overlooked security steps in the process, with one example being the Iowa Caucus application. Verifying the protection of confidential information such as social security numbers requires security at all levels, providing protection through any connected applications. CryptoGuard is a static code analyzer for Java. This program verifies that developers do not leave vulnerabilities in their application. The program aids the developer by identifying cryptographic misuses such as hard-coded keys, weak hashes, and the use of insecure protocols. In my Master's thesis work, I made several important contributions to improving the deployability, accessibility, and usability of CryptoGuard. I extended CryptoGuard to scan source and compiled code, created live documentation, and supported a dual cloud and local tool-suite. I also created build tool plugins and a program aid for CryptoGuard. In addition, I analyzed several Java-related surveys encompassing more than 50,000 developers and reported interesting current practices of real-world software developers. / Master of Science / Throughout the rise of software development, there has been an increase in development speed, with developers embracing methodologies that allow higher rates of change, such as Agile. Since Agile naturally addresses "problems of rapid change", this also increases the likelihood of insecure and vulnerable coding practices. Though consumers depend on various public applications, there can still be failures throughout the development process, as in the Iowa caucus application. It was determined that the Iowa caucus application development team's repository credentials (an API key) were left within the application itself. API keys provide the credentials to interact directly with server systems and, if left unguarded, can be easily exploited. Since the Iowa caucus application was released publicly, malicious actors (other people looking to exploit the application) may have already discovered this credential. Within our team we have created CryptoGuard, a program that analyzes applications to detect cryptographic issues such as a hard-coded API key. It was created with scalability in mind, so that it can scan enterprise code at a reasonable speed. To ensure its use within companies, we have been working on extending and enhancing the work to meet the current needs of Java developers. To survey the current Java landscape, we investigated three different companies and their publicly available developer ecosystem surveys. Among these companies are JetBrains, known for their integrated development environments (IDEs, applications that help write applications) and their own programming language; Snyk, known for their public security platform and anti-virus capability; and Jakarta EE, which is the new platform for the enterprise version of Java. Across these surveys, we accumulated more than 50,000 developers' responses, spanning various countries, company experience, and ages. With their responses amalgamated, we enhanced CryptoGuard to be available to as many developers, and to meet as many of their needs, as possible. First, CryptoGuard was enhanced to scan a project's source code. After that, to ensure our project is hosted by a cloud service, we are actively extending it to the Software Assurance Marketplace (SWAMP).
Funded by the DHS, SWAMP not only supplies a public cloud for developers to use, but also a local download option to scan a program on the user's own computer. Next, we created plugins for the two most widely used build tools, Gradle and Maven. Then, to give CryptoGuard a reactive aid, CryptoSoule was created to provide a minimal interface. Finally, utilizing a live documentation service, an open-source documentation website was created to provide working examples to the community.
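CryptoGuard itself is a static analyzer for Java, so the Python snippet below is not part of the tool; it is only a toy illustration of the class of mistake discussed above, a credential hard-coded in source files, caught here with a naive regular expression rather than the data-flow analysis a real analyzer performs. The pattern and file glob are assumptions made for the example.

    import re
    from pathlib import Path

    # Very rough pattern for things like: String apiKey = "sk_live_abc123...";
    HARDCODED_SECRET = re.compile(
        r"""(?i)(api[_-]?key|secret|password)\s*[:=]\s*['"][^'"]{8,}['"]"""
    )

    def scan_file(path: Path) -> list[tuple[int, str]]:
        """Return (line number, line) pairs that look like hard-coded credentials."""
        hits = []
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if HARDCODED_SECRET.search(line):
                hits.append((lineno, line.strip()))
        return hits

    if __name__ == "__main__":
        for source in Path(".").rglob("*.java"):
            for lineno, line in scan_file(source):
                print(f"{source}:{lineno}: possible hard-coded credential: {line}")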
306

[pt] IMPLANTAÇÃO E MONITORAMENTO DE MODELOS DE SISTEMAS DE APRENDIZADO DE MÁQUINA: STATUS QUO E PROBLEMAS / [en] ML-ENABLED SYSTEMS MODEL DEPLOYMENT AND MONITORING: STATUS QUO AND PROBLEMS

EDUARDO ZIMELEWICZ 23 September 2024 (has links)
[en] [Context] Systems that incorporate Machine Learning (ML) models, often referred to as ML-enabled systems, have become commonplace. However, empirical evidence on how ML-enabled systems are engineered in practice is still limited; this is especially true for activities surrounding ML model dissemination. [Goal] We investigate contemporary industrial practices and problems related to ML model dissemination, focusing on the model deployment and monitoring phases of the ML life cycle. [Method] We conducted an international survey to gather practitioner insights into how ML-enabled systems are engineered, collecting 188 complete responses from 25 countries. We analyzed the status quo and the problems reported for the model deployment and monitoring phases, using bootstrapping with confidence intervals for the analysis of contemporary practices and open and axial coding procedures for the qualitative analysis of the reported problems. [Results] Practitioners perceive the model deployment and monitoring phases as relevant but also difficult. With respect to model deployment, models are typically deployed as separate services, with limited adoption of MLOps principles. Reported problems include difficulties in designing the infrastructure architecture for production deployment and in integrating legacy applications. Concerning model monitoring, many models in production are not monitored. The main monitored aspects are inputs, outputs, and decisions. Reported problems involve the absence of monitoring practices, the need to create custom monitoring tools, and challenges in selecting suitable metrics. [Conclusion] Our results provide a better understanding of the practices adopted and the problems faced in practice, and support guiding ML deployment and monitoring research in a problem-driven manner.
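As a minimal sketch of the two findings above, deploying a model as a separate service component and monitoring its inputs, outputs, and decisions, the Python snippet below wraps a placeholder model in a single request-handling function that writes one structured log record per prediction. The model, feature names, and decision threshold are invented for illustration and are not drawn from the survey.

    import json
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("model-monitor")

    def predict_score(features: dict[str, float]) -> float:
        # Placeholder model; a real service would load a trained artifact here.
        return min(1.0, sum(features.values()) / 10.0)

    def serve_request(features: dict[str, float], threshold: float = 0.5) -> bool:
        score = predict_score(features)
        decision = score >= threshold
        # One monitoring record covering the three commonly monitored aspects.
        log.info(json.dumps({
            "ts": time.time(), "inputs": features, "output": score, "decision": decision,
        }))
        return decision

    print(serve_request({"age": 3.0, "usage": 4.5}))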
307

Target Cost and Quality Management in Kreditinstituten

Teetzmann, Eckart T. 26 July 2003 (has links) (PDF)
The competitive situation for banks has intensified considerably in recent years. Banks must respond to higher customer expectations regarding price and quality, as well as to rising operating costs, with a clear customer orientation and efficient cost management. The concept of Target Cost and Quality Management (TCQM) presented in this thesis is based on the fundamental ideas of target costing, but is extended and adapted into an integrated instrument for planning and controlling both the costs and the quality of banking services. Against the background of differing views in the literature on the character and systematization of banking services, the thesis first discusses how banking services should be understood. The concepts underlying TCQM, target costing and Total Quality Management/Quality Banking, are then explained, and a rough phase model of TCQM is derived from them; this phase model is embedded in a strategic framework. Building on a general description of the process of market-oriented design of banking services and processes, the definition of price, quality, and cost targets is subsequently explained in detail. The basis for a market-driven definition of targets is the identification and evaluation of customer requirements. For the practical application of the instruments and methods presented in the thesis, the differentiation of customer requirements into basic, performance, and excitement requirements, as well as into attribute-oriented and event-oriented requirements, is particularly relevant. Concrete targets can then be derived with the help of several coordinated tables. The discussion of target setting is followed by a presentation of methods that support the achievement of those targets. A particular emphasis is placed on bank-specific process cost (activity-based cost) management because of its critical importance for the success of TCQM.
308

Target Cost and Quality Management in Kreditinstituten

Teetzmann, Eckart T. 16 April 2003 (has links)
The competitive situation for banks has intensified considerably in recent years. Banks must respond to higher customer expectations regarding price and quality, as well as to rising operating costs, with a clear customer orientation and efficient cost management. The concept of Target Cost and Quality Management (TCQM) presented in this thesis is based on the fundamental ideas of target costing, but is extended and adapted into an integrated instrument for planning and controlling both the costs and the quality of banking services. Against the background of differing views in the literature on the character and systematization of banking services, the thesis first discusses how banking services should be understood. The concepts underlying TCQM, target costing and Total Quality Management/Quality Banking, are then explained, and a rough phase model of TCQM is derived from them; this phase model is embedded in a strategic framework. Building on a general description of the process of market-oriented design of banking services and processes, the definition of price, quality, and cost targets is subsequently explained in detail. The basis for a market-driven definition of targets is the identification and evaluation of customer requirements. For the practical application of the instruments and methods presented in the thesis, the differentiation of customer requirements into basic, performance, and excitement requirements, as well as into attribute-oriented and event-oriented requirements, is particularly relevant. Concrete targets can then be derived with the help of several coordinated tables. The discussion of target setting is followed by a presentation of methods that support the achievement of those targets. A particular emphasis is placed on bank-specific process cost (activity-based cost) management because of its critical importance for the success of TCQM.
309

Modularization and evaluation of vehicle’s electrical system

Abdo, Nawar January 2019 (has links)
Modularization is a strategy used by many companies to help them provide their customers with a high variety of customized products efficiently. This is done through the customization of different independent modules, which are connected by standardized interfaces that are shared throughout the entire module variety. Scania, being one of the large companies that provide modular products, has been successfully improving its modularization concepts for many years and is one of the most iconic companies when it comes to modularization of buses, trucks, and engines. But with the increasing need for electronics integrated in the vehicles, it is becoming more and more important to modularize the electrical system. There is currently an existing, modularized product architecture for the electrical system, and Scania wants to know how well modularized it is, as there is no unified way to indicate what is considered the better solution. To analyze the current state of the electrical system, a systematic method of modularization was used, which would help answer three important questions: Are the modules well defined? Is there a way to systematically compare alternative solutions? Which criteria are more important to focus on? Since there is no unified way of modularization, many modularization methods have been created, and each one has been optimized for a certain purpose. This project compares three different modularization methods and then uses the one deemed to be the preferred method to help provide the answers that the company seeks when investigating the modularity of the electrical system. As the electrical system is very complex and the project has a limited amount of resources, it was decided to choose one of the control units as an example, namely the APS (air processing system). The literature study showed that the most rewarding method to use was MFD (Module Function Deployment), as it provides more information about the product and which criteria the company should focus on. It was then decided to use the relevant steps of MFD to analyze the state of the APS as an example of how this method works.
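One step that MFD shares with QFD-style methods is scoring alternative solutions against weighted criteria. The short Python sketch below shows that mechanic in a generic form; the criteria, weights, scores, and the two unnamed alternatives are made up for illustration and do not reproduce the thesis's actual evaluation.

    def rank_alternatives(weights: dict[str, float],
                          scores: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
        """Return alternatives sorted by weighted total score, highest first."""
        totals = {
            alt: sum(weights[c] * s for c, s in crit_scores.items())
            for alt, crit_scores in scores.items()
        }
        return sorted(totals.items(), key=lambda kv: -kv[1])

    # Hypothetical weights and 1-5 scores for three candidate modularization methods.
    weights = {"effort": 0.2, "information gained": 0.5, "company fit": 0.3}
    scores = {
        "MFD": {"effort": 3, "information gained": 5, "company fit": 4},
        "Method B": {"effort": 4, "information gained": 3, "company fit": 3},
        "Method C": {"effort": 5, "information gained": 2, "company fit": 3},
    }
    print(rank_alternatives(weights, scores))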
310

För ett effektivt operatörsunderhåll / For efficient operator maintenance

El Khabiry, Mohamed January 2021 (has links)
This thesis was carried out at AstraZeneca Södertälje within Meto-pre stage, which is part of the API factory, with the aim of streamlining the maintenance work. A preventive maintenance survey was carried out to identify maintenance work that was performed twice, by both API operators and the maintenance provider Caverion. The thesis presents an account of concepts in maintenance, TPM (Total Productive Maintenance), and the decision model QFD (Quality Function Deployment), which constitute the theoretical starting point for the results and analysis. The thesis is based on collected observations and interview material from staff active in operations and maintenance. The results of the study show that there are overlaps in the maintenance work, for example in weekly supervision/inspections and preventive maintenance (FU) within the factory part Meto-pre stage. Both operators and Caverion carry out maintenance operations that include visual checks of oil levels, leakage, noise, and vibrations for several machines. A clearer distribution of maintenance measures between operators and Caverion will contribute to the streamlining of the maintenance work.
