101 |
Introducing software testing in an SME : An investigation of software testing in a web application / Introduktion av mjukvarutestning i ett SMF : En undersökning av mjukvarutestning i en webbapplikation
Arn, Per, January 2023 (has links)
Quality assurance and software testing of software artifacts are as important as ever, and this is especially true for web applications. The web applications of today are more complex and are used in more critical systems, at a larger scale, than ever before. However, testing these applications is challenging due to their dynamic nature, and it is difficult to find clear, up-to-date guidelines on how to implement and evaluate regression software testing in small and medium-sized enterprises (SMEs) developing web applications. The purpose of this thesis was to investigate this problem and propose an approach to implementing software regression testing in web applications for SMEs: that is, to recommend what to test, what kind of software testing could be implemented, and which state-of-the-art front-end testing frameworks to use. An in-depth literature study was conducted to survey past and present work. Two rounds of semi-structured, in-depth interviews were conducted with software developers at the company where this thesis was carried out. The main purpose of the first interview was to identify business goals from which to derive, and subsequently create, a testing suite in each of four testing frameworks: Cypress, Jest, Playwright and Vitest. The purpose of the second interview was to evaluate and compare these testing suites in order to propose an approach to software testing in web applications. In addition, code coverage and mutation scores were considered when evaluating the testing suites. The main finding of this thesis is that a reasonable approach to introducing software testing into an SME that develops a web application is to use business requirements to generate test cases and to prioritize end-to-end testing, since the benefits observed for it in this thesis far outweighed those of the component testing suites, although a combination of both would offer the best of both worlds. Although this thesis was conducted on a web application written in React, the findings and recommendations can be applied to any front-end framework, such as Angular or Vue. / Kvalitetssäkring och testning av mjukvara är lika viktigt som alltid och detta är i synnerhet även fallet i webbapplikationer. Dagens webbapplikationer är mer komplexa och används i mer kritiska system på en större skala än någonsin tidigare. Dessvärre är det svårt att testa dessa applikationer eftersom att de är dynamiska. Det är svårt att hitta riktlinjer för hur man ska implementera och utvärdera regressionstester på små och medelstora företag (SMF) som utvecklar webbapplikationer. Syftet med denna uppsats var att undersöka problemet och föreslå en riktlinje för hur man kan implementera regressionstestning i SMF och i webbapplikationer. Detta innebär att föreslå vad man kan testa, vilken form av mjukvarutestning man kan implementera och med vilka moderna testningsramverk man kan göra detta med. En ingående litteraturstudie genomfördes för att ta reda på vad som hade gjorts tidigare inom området. Två rundor av semistrukturerade intervjuer genomfördes med mjukvaruutvecklarna på företaget där uppsatsen genomfördes. Syftet med den första intervjun var att hitta företagsmål som sedan agerade grund till testningssviter i fyra olika ramverk; Cypress, Jest, Playwright och Vitest. Syftet med den andra intervjun var att utvärdera och jämföra dessa testsviter för att rekommendera ett tillvägagångssätt för att implementera mjukvarutestning i webbapplikationer.
Utöver intervjuerna så bidrog mutationspoäng och kodtäckning till rekommendationerna. Uppsatsen finner att ett rimligt sätt att implementera regressionstester i ett SMF och en webbapplikation är att generera testfall utifrån affärskrav och att prioritera testning på användarnivå eftersom att fördelarna från denna nivå av testning överväger fördelarna från komponenttestning. Allra helst bör man implementera en kombination av båda nivåerna. Fastän denna uppsats undersökte en webbapplikation i React så kan dessa upptäckter och rekommendationer även tillämpas på vilket frontendramverk som helst så som exempelvis Angular eller Vue.
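To make the end-to-end testing recommendation concrete, the following is a minimal sketch of what a business-requirement-driven end-to-end regression test could look like. It uses Playwright's Python API rather than the JavaScript tooling evaluated in the thesis, and the URL, selectors, credentials and page heading are hypothetical placeholders, not taken from the studied application.

```python
# Minimal sketch: an end-to-end regression test derived from a business
# requirement ("a visitor can log in and reach the dashboard").
# The URL, selectors, credentials and heading text are hypothetical placeholders.
from playwright.sync_api import sync_playwright

def test_login_reaches_dashboard():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.test/login")          # hypothetical URL
        page.fill("input[name='email']", "user@example.test")
        page.fill("input[name='password']", "secret")
        page.click("button[type='submit']")
        page.wait_for_url("**/dashboard")                # business goal: user lands on the dashboard
        assert "Dashboard" in page.inner_text("h1")      # hypothetical heading
        browser.close()
```

A component-level test in Jest or Vitest would instead exercise a single React component in isolation, which is where the thesis sees the two levels of testing complementing each other.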
|
102 |
A New Framework For QoS Provisioning In Wireless LANs Using The P-Persistent MAC Protocol
Anna, Kiran Babu, 01 January 2010 (has links)
The support of multimedia traffic over IEEE 802.11 wireless local area networks (WLANs) has recently received considerable attention. This dissertation proposes a new framework that provides efficient channel access, service differentiation and statistical QoS guarantees in the enhanced distributed channel access (EDCA) protocol of IEEE 802.11e. In the first part of the dissertation, the new framework to provide QoS support in IEEE 802.11e is presented. The framework uses three independent components, namely, a core MAC layer, a scheduler, and an admission control. The core MAC layer concentrates on the channel access mechanism to improve the overall system efficiency. The scheduler provides service differentiation according to the weights assigned to each Access Category (AC). The admission control provides statistical QoS guarantees. The core MAC layer developed in this dissertation employs a P-Persistent MAC protocol, and a weight-based fair scheduler is used to obtain throughput service differentiation at each node. In WLANs, the MAC protocol is the main element that determines the efficiency of sharing the limited communication bandwidth of the wireless channel. In the second part of the dissertation, analytical Markov chain models for the P-Persistent 802.11 MAC protocol under unsaturated load conditions with heterogeneous loads are developed. The Markov models provide closed-form formulas for calculating the packet service time, the packet end-to-end delay, and the channel capacity under unsaturated load conditions. The accuracy of the models has been validated by extensive NS2 simulation tests, and the models are shown to give accurate results. In the final part of the dissertation, the admission control mechanism is developed and evaluated. The analytical model for P-Persistent 802.11 is used to develop a measurement-assisted, model-based admission control that uses delay as the admission criterion. Both distributed and centralized admission control schemes are developed, and the performance results show that both provide the QoS guarantees very efficiently. Since the distributed admission control scheme does not have complete state information about the WLAN, its performance is generally inferior to that of the centralized scheme. The detailed performance results using the NS2 simulator have demonstrated the effectiveness of the proposed framework. Compared to 802.11e EDCA, the scheduler consistently achieved the desired throughput differentiation and easy tuning. The core MAC layer achieved better delays in terms of channel access, average packet service time and end-to-end delay, and it also achieved higher system throughput than EDCA for any given service differentiation ratio. The admission control provided the desired statistical QoS guarantees.
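As a rough illustration of the channel-access mechanism the framework builds on, the sketch below simulates slotted p-persistent access for a fixed set of saturated stations and reports throughput and collision statistics. It is a toy Monte Carlo simulation under assumed parameters, not the dissertation's analytical Markov chain model, which additionally covers unsaturated and heterogeneous loads.

```python
# Toy sketch of p-persistent slotted channel access: in every slot each
# backlogged station transmits with probability p; a slot succeeds only when
# exactly one station transmits. Stations are assumed saturated; all
# parameter values are illustrative.
import random

def simulate(p: float, n_stations: int, n_slots: int = 100_000) -> dict:
    successes = collisions = idle = 0
    for _ in range(n_slots):
        transmitters = sum(1 for _ in range(n_stations) if random.random() < p)
        if transmitters == 1:
            successes += 1
        elif transmitters == 0:
            idle += 1
        else:
            collisions += 1
    return {
        "throughput": successes / n_slots,          # fraction of useful slots
        "collision_rate": collisions / n_slots,
        "idle_rate": idle / n_slots,
        "slots_per_success": n_slots / successes if successes else float("inf"),
    }

# With n saturated stations, throughput peaks when p is roughly 1/n.
print(simulate(p=0.1, n_stations=10))
```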
|
103 |
Self-assembly of lyotropic chromonic liquid crystals: Effects of additives and applications
Park, Heung-Shik, 30 November 2010 (has links)
No description available.
|
104 |
Principles and Methods of Adaptive Network Algorithm Design under Various Quality-of-Service Requirements
Li, Ruogu, 19 December 2012 (has links)
No description available.
|
105 |
Prediction of 5G system latency contribution for 5GC network functions / Förutsägelse av 5G-systemets latensbidrag för 5GC-nätverksfunktioner
Cheng, Ziyu, January 2023 (has links)
End-to-end delay measurement is crucial for network models, as it acts as a pivotal metric of a model's effectiveness, helps delineate its performance ceiling, and drives further refinement and enhancement. This holds true for 5G Core Network (5GC) models as well. Commercial 5G networks, with their intricate topologies and requirements for reduced latency, need an effective model for anticipating each server's current latency and load level; introducing a model that estimates the present latency and load level of each network function server would therefore be advantageous. The core of this thesis is to record and analyze packet data and CPU load data of network functions running at different user counts as operational data, where the data from each successful operation of a service is used as model data for analyzing the relationship between latency and CPU load. Particular emphasis is placed on the end-to-end latency of the PDU session establishment scenario for two core functions: the Access and Mobility Management Function (AMF) and the Session Management Function (SMF). Through this methodology, a more accurate model has been developed to review the latency of servers and nodes when serving up to 650,000 end users. This approach has provided new insights for network-level testing, paving the way for a comprehensive understanding of network performance under various conditions, including flow-control strategies such as "slow start" and "delayed TCP acknowledgement", as well as overload situations where the load of a network function exceeds 80%. It also identifies the optimal performance range. / Latensmätningar för slutanvändare anses vara viktiga för nätverksmodeller eftersom de fungerar som en måttstock för modellens effektivitet, hjälper till att definiera dess prestandatak samt bidrar till vidare förfining och förbättring. Detta antagande gäller även för 5G kärnnätverk (5GC). Kommersiella 5G-nätverk med sin komplexa topologi och krav på låg latens, kräver en effektiv modell för att prediktera varje servers aktuella last och latensbidrag. Följdaktligen behövs en modell som beskriver den aktuella latensen och dess beroende till lastnivå hos respektive nätverkselement. Arbetet består i att samla in och analysera paketdata och CPU-last för nätverksfunktioner i drift med olika antal slutanvändare. Fokus ligger på tjänster som används som modelldata för att analysera förhållandet mellan latens och CPU-last. Särskilt fokus läggs på latensen för slutanvändarna vid PDU session-etablering för två kärnfunktioner – Åtkomst- och mobilitetshanteringsfunktionen (AMF) samt Sessionshanteringsfunktionen (SMF). Genom denna metodik har en mer exakt modell tagits fram för att granska latensen för servrar och noder vid användning av upp till 650 000 slutanvändare. Detta tillvägagångssätt har givit nya insikter för nätverksnivåtestningen, vilket banar väg för en omfattande förståelse för nätverksprestanda under olika förhållanden. Dessa förhållanden inkluderar strategier som ”trög start” och ”fördröjd TCP bekräftelse” för flödeskontroll, eller överlastsituationer där lasten hos nätverksfunktionerna överstiger 80%. Det identifierar också det optimala prestandaområdet.
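As an illustration of how a latency-versus-load relationship of this kind could be captured, the sketch below fits a simple queueing-inspired curve, latency ≈ a + b·load/(1 − load), to per-function samples. The functional form, the sample values and the variable names are assumptions for illustration only, not the model or the measurements from the thesis.

```python
# Minimal sketch: fitting per-network-function latency against CPU load with a
# queueing-inspired shape, latency ≈ a + b * load / (1 - load), which grows
# sharply as the load approaches 100%. Model form and sample data are assumed.
import numpy as np

def fit_latency_model(load: np.ndarray, latency_ms: np.ndarray):
    """load in [0, 1); latency in milliseconds. Returns coefficients (a, b)."""
    x = load / (1.0 - load)
    A = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(A, latency_ms, rcond=None)
    return a, b

def predict(a: float, b: float, load: np.ndarray) -> np.ndarray:
    return a + b * load / (1.0 - load)

# Example with made-up measurements (replace with real AMF/SMF samples):
load = np.array([0.10, 0.30, 0.50, 0.70, 0.80, 0.90])
latency_ms = np.array([2.1, 2.6, 3.4, 5.0, 6.8, 12.5])
a, b = fit_latency_model(load, latency_ms)
print(f"a={a:.2f} ms, b={b:.2f} ms; "
      f"predicted at 85% load: {predict(a, b, np.array([0.85]))[0]:.1f} ms")
```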
|
106 |
GAN-based Automatic Segmentation of Thoracic Aorta from Non-contrast-Enhanced CT Images / GAN-baserad automatisk segmentering av thoraxaorta från icke-kontrastförstärkta CT-bilder
Xu, Libo, January 2021 (has links)
Deep learning-based automatic segmentation methods have developed rapidly in recent years and now deliver promising performance on medical image segmentation tasks, providing clinical medicine with an accurate and fast computer-aided diagnosis method. Generative adversarial networks and their extended frameworks have achieved encouraging results on image-to-image translation problems. In this report, the proposed hybrid network combined a cycle-consistent adversarial network, which translated contrast-enhanced computed tomography angiography images into conventional low-contrast CT scans, with a segmentation network, and trained the two simultaneously in an end-to-end manner. The trained segmentation network was tested on non-contrast-enhanced CT images. The synthesis and segmentation processes were also implemented in a two-stage manner. The two-stage process achieved a higher Dice similarity coefficient on the test data than the baseline U-Net did, but the proposed hybrid network did not outperform the baseline, due to the difference in field of view between the two training data sets.
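For reference, the evaluation metric mentioned above can be computed as follows; this is a generic sketch of the Dice similarity coefficient between two binary masks, not the exact implementation used in the report.

```python
# Minimal sketch of the Dice similarity coefficient between a predicted mask
# and a ground-truth mask; a small smoothing term avoids division by zero on
# empty masks. Illustrative only.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, smooth: float = 1e-6) -> float:
    """pred and target are binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)

# Example on toy 2D masks:
pred = np.array([[0, 1, 1], [0, 1, 0]])
target = np.array([[0, 1, 0], [0, 1, 1]])
print(f"DSC = {dice_coefficient(pred, target):.3f}")  # 2*2 / (3+3) ≈ 0.667
```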
|
107 |
PROCESS INTENSIFICATION OF INTEGRATED CONTINUOUS CRYSTALLIZATION SYSTEMS WITH RECYCLE
Rozhin Rojan Parvaresh (14093547), 23 July 2024 (has links)
The purification of most active pharmaceutical ingredients (APIs) is primarily achieved through crystallization, conducted in batch, semi-batch, or continuous modes. Recently, continuous crystallization has gained interest in the pharmaceutical industry for its potential to reduce manufacturing costs and maintenance. Crystal characteristics such as size, purity, and polymorphism significantly affect downstream processes like filtration and tableting, as well as physicochemical properties like bioavailability, flowability, and compressibility. Developing an optimal operation that meets the critical quality attributes (CQAs) of these crystal properties is essential.

This dissertation begins by focusing on designing an innovative integrated crystallization system to enhance control over crystalline material properties. The system expands the attainable region of crystal size distribution (CSD) by incorporating multiple Mixed-Suspension Mixed-Product Removal (MSMPR) units and integrating wet milling, classification, and a recycle loop, enhancing robustness and performance. Extensive simulations and experimental data validate the framework, demonstrating significant improvements in efficiency and quality. The framework is further generalized to optimize crystallizer networks for controlling critical quality attributes such as mean size, yield, and CSD by evaluating various network configurations to identify optimal operating parameters.

The final part of this work concentrates on using the framework to improve continuous production of a commercial API, Atorvastatin calcium (ASC), aiming for higher yield and lower costs. This approach establishes an attainable region to increase crystal sizes and productivity. Due to ASC’s nucleation-dominated nature, the multi-stage system could not grow the crystals sufficiently to bypass granulation, the bottleneck process in ASC manufacturing. Therefore, spherical agglomeration was proposed as an intensification process within an integrated two-stage crystallization spherical agglomeration system to control the size and morphology of ASC crystals and improve downstream processing and tableting. This method proved highly successful, leading to the development of an end-to-end continuous manufacturing process integrating reaction, crystallization, spherical agglomeration, filtration, and drying. This modular system effectively addressed challenges in integrating various unit operations into a coherent continuous process with high production rates.
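For orientation, the sketch below evaluates the textbook steady-state population balance of a single MSMPR crystallizer, n(L) = n0·exp(−L/(Gτ)), and its size moments. All parameter values are assumed, and this single-stage model deliberately ignores the milling, classification and recycle elements that the dissertation's integrated framework optimizes.

```python
# Textbook single-stage MSMPR sketch: steady-state population density
# n(L) = n0 * exp(-L / (G * tau)), used only to show how growth rate G and
# residence time tau set the crystal size distribution. Parameters are assumed.
import numpy as np

n0 = 1e12      # nuclei population density, #/(m^3 * m)   (assumed)
G = 1e-8       # crystal growth rate, m/s                 (assumed)
tau = 1800.0   # residence time, s                        (assumed)

L = np.linspace(0.0, 15 * G * tau, 3000)      # size grid, m
n = n0 * np.exp(-L / (G * tau))               # population density

moments = {j: np.trapz(L**j * n, L) for j in range(5)}
number_mean = moments[1] / moments[0]          # analytically G * tau
mass_mean = moments[4] / moments[3]            # analytically 4 * G * tau
print(f"number-mean size: {number_mean*1e6:.1f} um (G*tau = {G*tau*1e6:.1f} um)")
print(f"mass-weighted mean size: {mass_mean*1e6:.1f} um (4*G*tau = {4*G*tau*1e6:.1f} um)")
```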
|
108 |
From Historical Newspapers to Machine-Readable Data: The Origami OCR PipelineLiebl, Bernhard, Burghardt, Manuel 20 June 2024 (has links)
While historical newspapers recently have gained a lot of attention in the digital humanities, transforming them into machine-readable data by means of OCR poses some major challenges. In order to address these challenges, we have developed an end-to-end OCR pipeline named Origami. This pipeline is part of a current project on the digitization and quantitative analysis of the German newspaper “Berliner Börsen-Zeitung” (BBZ), from 1872 to 1931. The Origami pipeline reuses existing open source OCR components and on top offers a new configurable architecture for layout detection, a simple table recognition, a two-stage X-Y cut for reading order detection, and a new robust implementation for document dewarping. In this paper we describe the different stages of the workflow and discuss how they meet the above-mentioned challenges posed by historical newspapers.
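To illustrate the reading-order step, the following is a generic sketch of a recursive X-Y cut on a binarized page: the page is split along the widest horizontal or vertical whitespace gap, and the procedure recurses on the resulting blocks. The gap threshold and the array-based representation are assumptions; this is not the Origami implementation.

```python
# Generic recursive X-Y cut sketch for reading-order detection on a binary
# page image (1 = ink). Blocks are returned top-to-bottom, left-to-right.
import numpy as np

def _widest_gap(profile: np.ndarray, min_gap: int):
    """Return (start, end) of the widest run of empty bins, or None."""
    best, start = None, None
    for i, v in enumerate(profile):
        if v == 0 and start is None:
            start = i
        elif v != 0 and start is not None:
            if i - start >= min_gap and (best is None or i - start > best[1] - best[0]):
                best = (start, i)
            start = None
    if start is not None and len(profile) - start >= min_gap:
        if best is None or len(profile) - start > best[1] - best[0]:
            best = (start, len(profile))
    return best

def xy_cut(page: np.ndarray, min_gap: int = 10):
    """Return reading-ordered (top, bottom, left, right) boxes for a binary page."""
    def recurse(top, bottom, left, right, horizontal_first=True):
        block = page[top:bottom, left:right]
        if block.sum() == 0:
            return []
        for axis in ((0, 1) if horizontal_first else (1, 0)):
            profile = block.sum(axis=1 - axis)   # axis 0: row profile, axis 1: column profile
            gap = _widest_gap(profile, min_gap)
            if gap:
                a, b = gap
                if axis == 0:   # horizontal cut: upper part, then lower part
                    return (recurse(top, top + a, left, right, False)
                            + recurse(top + b, bottom, left, right, False))
                else:           # vertical cut: left column, then right column
                    return (recurse(top, bottom, left, left + a, True)
                            + recurse(top, bottom, left + b, right, True))
        return [(top, bottom, left, right)]
    return recurse(0, page.shape[0], 0, page.shape[1])
```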
|
109 |
Modélisations de la dynamique trophique d'un écosystème Méditerranéen exploité : le Golfe de Gabès (Tunisie) / Modeling the trophic dynamics of an exploited Mediterranean ecosystem : the Gulf of Gabes (Tunisia)
Halouani, Ghassen, 05 December 2016 (has links)
L’objectif de cette thèse est d’améliorer la compréhension du fonctionnement et de la structure trophique du golfe de Gabès en Tunisie. Afin de concilier exigences écologiques et exploitation des ressources marines, différents modèles écosystémiques ont été développés pour étudier sa dynamique trophique et contribuer à la réflexion sur la mise en place de plans de gestion. Un modèle trophique d’équilibre de masse « Ecospace » a été construit afin d’évaluer les conséquences écosystémiques de différentes mesures de gestion. Les résultats des simulations ont permis d’explorer les interactions entre la pêche côtière et la pêche au chalut benthique et d’identifier des zones où les mesures de gestion sont effectives. Un modèle end-to-end a également été appliqué pour expliciter la dynamique des espèces considérées, depuis le forçage climatique jusqu'à la pêche. Cette approche de modélisation consiste à forcer le modèle individu-centré « OSMOSE » par un modèle biogéochimique « Eco3MMed ». Ce modèle a permis d’établir une représentation cohérente du réseau trophique et de simuler des scénarios de gestion théoriques de mise en réserve. Le modèle end-to-end a également été utilisé pour étudier la sensibilité d’un ensemble d’indicateurs écologiques à la pression de pêche. Les résultats ont révélé que les indicateurs de taille sont les plus adaptés pour faire le suivi de l’impact de la pêche dans le golfe de Gabès. Au final, une approche comparative entre plusieurs écosystèmes méditerranéens a été mise en place avec le modèle EcoTroph pour comparer leurs structures trophiques et explorer les effets de plusieurs niveaux d’exploitation par l’analyse de leurs spectres trophiques. / The objective of this thesis is to improve the understanding of the trophic structure and functioning of the Gulf of Gabes in Tunisia. In order to reconcile environmental concerns and the exploitation of marine resources, different ecosystem models were developed to study the ecosystem's dynamics and contribute to the discussion on the implementation of management plans. A spatial and temporal dynamic model, “Ecospace”, was built to evaluate the ecosystem consequences of different management measures, based on scenarios derived from the current regulation. The simulation results made it possible to investigate the interactions between coastal fishing and benthic trawling and to identify areas where management measures are effective. An end-to-end model was also applied to the Gulf of Gabes ecosystem to represent the dynamics of 11 high-trophic-level species, from climate forcing to fishing. This modelling approach consists of forcing the individual-based model "OSMOSE" with the biogeochemical model "ECO3M-Med". This model made it possible to establish a coherent representation of the food web and to simulate theoretical management scenarios of spatial fishing closures. The end-to-end model was also used to study the sensitivity of a set of ecological indicators to fishing pressure. The simulation of different levels of fishing mortality showed that size-based indicators were the most relevant for monitoring the impact of fishing in the Gulf of Gabes. Finally, a comparative approach between several Mediterranean ecosystems was applied using the EcoTroph model to compare their trophic structures and explore the effects of different levels of fishing pressure through the analysis of their trophic spectra.
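As a purely didactic aside, the sketch below illustrates the idea behind a biomass trophic spectrum: biomass flow declines from one trophic level to the next according to a transfer efficiency, and fishing further reduces the flow at higher levels. The functional form and every parameter value are assumptions for illustration; they do not correspond to the EcoTroph, Ecospace or OSMOSE formulations used in the thesis.

```python
# Toy biomass trophic spectrum: biomass entering at trophic level 2 is passed
# up in half-level steps with a fixed transfer efficiency, and fishing removes
# a fraction of the flow at the higher levels. Didactic simplification only;
# all parameter values are assumed.
def trophic_spectrum(b2: float = 1000.0,                # flow at trophic level 2 (arbitrary units)
                     transfer_efficiency: float = 0.10,
                     fishing_mortality_rate: float = 0.3,  # applied from level 3.5 upward
                     levels=(2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0)):
    flow = b2
    spectrum = {}
    for i, tl in enumerate(levels):
        if i > 0:
            flow *= transfer_efficiency ** 0.5          # each half level keeps sqrt(TE) of the flow
            if tl >= 3.5:
                flow *= (1.0 - fishing_mortality_rate)  # fishing removes part of the flow
        spectrum[tl] = flow
    return spectrum

for tl, b in trophic_spectrum().items():
    print(f"TL {tl:.1f}: {b:8.2f}")
```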
|
110 |
Autonomous Driving with Deep Reinforcement Learning
Zhu, Yuhua, 17 May 2023 (has links)
The researcher developed an autonomous driving simulation by training an end-to-end policy model with deep reinforcement learning algorithms in the Gym-duckietown virtual environment. The control strategy of the model was designed for the lane-following task. Several reinforcement learning algorithms were implemented, and the SAC algorithm was chosen to train a non-end-to-end model, which takes information provided by the environment, such as speed, as input, as well as an end-to-end model, which takes images captured by the agent's front camera as input. In this paper, the researcher compared the advantages and disadvantages of the two models using kinetic parameters in the environment and conducted a series of experiments on the control strategy of the end-to-end model to explore the effects of different environmental parameters or reward functions on the models.
CHAPTER 1 INTRODUCTION 1
1.1 AUTONOMOUS DRIVING OVERVIEW 1
1.2 RESEARCH QUESTIONS AND METHODS 3
1.2.1 Research Questions 3
1.2.2 Research Methods 4
1.3 PAPER STRUCTURE 5
CHAPTER 2 RESEARCH BACKGROUND 7
2.1 RESEARCH STATUS 7
2.2 THEORETICAL BASIS 8
2.2.1 Machine Learning 8
2.2.2 Deep Learning 9
2.2.3 Reinforcement Learning 11
2.2.4 Deep Reinforcement Learning 14
CHAPTER 3 METHOD 15
3.1 SIMULATION PLATFORM 16
3.2 CONTROL TASK 17
3.3 OBSERVATION SPACE 18
3.3.1 Information as Observation (Non-end-to-end) 19
3.3.2 Images as Observation (End-to-end) 20
3.4 ACTION SPACE 22
3.5 ALGORITHM 23
3.5.1 Mathematical Foundations 23
3.5.2 Policy Iteration 25
3.6 POLICY ARCHITECTURE 25
3.6.1 Network Architecture for Non-end-to-end Model 26
3.6.2 Network Architecture for End-to-end Model 28
3.7 REWARD SHAPING 29
3.7.1 Calculation of Speed-based Reward Function 30
3.7.2 Calculation of the reward function based on the position of the agent relative to the right lane 31
CHAPTER 4 TRAINING PROCESS 33
4.1 TRAINING PROCESS OF NON-END-TO-END MODEL 34
4.2 TRAINING PROCESS OF END-TO-END MODEL 35
CHAPTER 5 RESULT 38
CHAPTER 6 TEST AND EVALUATION 41
6.1 EVALUATION OF END-TO-END MODEL 43
6.1.1 Speed Tests in Two Scenarios 43
6.1.2 Lateral Deviation between the Agent and the Right Lane’s Centerline 44
6.1.3 Orientation Deviation between the Agent and the Right Lane’s Centerline 45
6.2 COMPARISON OF THE END-TO-END MODEL TO TWO BASELINES IN SIMULATION 46
6.2.1 Comparison with Non-end-to-end Baseline 47
6.2.2 Comparison with PD Baseline 51
6.3 TEST THE EFFECT OF DIFFERENT WEIGHTS ASSIGNMENTS ON THE END-TO-END MODEL 53
CHAPTER 7 CONCLUSION 57
/ Der Forscher entwickelte eine autonome Fahrsimulation, indem er ein End-to-End-Regelungsmodell mit Hilfe von Deep Reinforcement Learning-Algorithmen in der virtuellen Umgebung von Gym-duckietown trainierte. Die Kontrollstrategie des Modells wurde für die Aufgabe des Spurhaltens entwickelt. Es wurden mehrere Verstärkungslernalgorithmen implementiert, und der SAC-Algorithmus wurde ausgewählt, um ein Nicht-End-to-End-Modell mit den von der Umgebung bereitgestellten Informationen wie Geschwindigkeit als Eingabewerte sowie ein End-to-End-Modell mit den von der Frontkamera des Agenten aufgenommenen Bildern als Eingabe zu trainieren. In diesem Beitrag verglich der Forscher die Vor- und Nachteile der beiden Modelle unter Verwendung kinetischer Parameter in der Umgebung und führte eine Reihe von Experimenten zur Kontrollstrategie des End-to-End-Modells durch, um die Auswirkungen verschiedener Umgebungsparameter oder Belohnungsfunktionen auf die Modelle zu untersuchen.
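To make the reward-shaping idea from the abstract and the table of contents concrete, the following is a minimal sketch of a lane-following reward that rewards forward speed and penalizes lateral and orientation deviation from the right lane's centerline. The weights, the assumed lane half-width and the off-road penalty are illustrative assumptions, not the values used in the thesis.

```python
# Minimal lane-following reward sketch: reward forward speed, penalize lateral
# and orientation deviation from the right lane's centerline. All weights and
# thresholds are assumed for illustration.
def lane_following_reward(speed: float,
                          lateral_deviation: float,      # metres from the lane centerline
                          orientation_deviation: float,  # radians from the lane direction
                          w_speed: float = 1.0,
                          w_lat: float = 10.0,
                          w_orient: float = 2.0) -> float:
    if abs(lateral_deviation) > 0.25:   # assumed lane half-width: agent has left the lane
        return -10.0                    # strong penalty for leaving the road
    return (w_speed * speed
            - w_lat * abs(lateral_deviation)
            - w_orient * abs(orientation_deviation))

# Example: moderate speed, slightly off-centre, nearly aligned with the lane.
print(lane_following_reward(speed=0.6, lateral_deviation=0.05, orientation_deviation=0.1))
```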
|