41 |
A Statistical Clinical Decision Support Tool for Determining Thresholds in Remote Monitoring Using Predictive Analytics January 2013 (has links)
abstract: Statistical process control (SPC) and predictive analytics have been used in industrial manufacturing and design, but until now have not been applied to threshold data from vital sign monitoring in remote care settings. In this study of 20 elders with COPD and/or CHF, extended months of peak flow monitoring (FEV1) using telemedicine are examined to determine when an earlier or later clinical intervention might have been advised. The study demonstrated that SPC may bring less than a 2.0% increase in clinician workload while providing more robust statistically derived thresholds than clinician-derived thresholds. Using a random K-fold model, FEV1 output was validated to a Generalized R-square of 0.80, demonstrating adequate learning by a threshold classifier. Disease severity also impacted the model. Forecasting future FEV1 data points is possible with a complex ARIMA (45, 0, 49), but variation and sources of error require tight control. Validation was above average and encouraging for clinician acceptance. These statistical algorithms allow the patient's own data to drive reductions in variability and potentially increase clinician efficiency, improve patient outcomes, and reduce the cost burden on the health care ecosystem. / Dissertation/Thesis / Ph.D. Engineering 2013
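To make the threshold derivation concrete, here is a hedged Python sketch of the general approach the abstract describes: individuals-chart control limits over an FEV1 series plus a short ARIMA forecast. The data, the use of statsmodels, and the low ARIMA order are illustrative assumptions; the study itself reports an ARIMA(45, 0, 49), which would require a much longer series to estimate.

    # Hypothetical sketch: statistically derived thresholds for FEV1 telemonitoring.
    # The data and the ARIMA order below are illustrative, not the study's own.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    fev1 = np.array([2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 1.8, 2.0, 2.1, 1.9] * 5)  # liters, simulated

    # Individuals control chart: moving-range estimate of sigma, 3-sigma limits.
    mr = np.abs(np.diff(fev1))
    sigma = mr.mean() / 1.128          # d2 constant for subgroup size 2
    center = fev1.mean()
    lcl, ucl = center - 3 * sigma, center + 3 * sigma
    alerts = np.where((fev1 < lcl) | (fev1 > ucl))[0]  # points flagged for clinician review

    # Short-horizon forecast; a low order stands in for the paper's ARIMA(45, 0, 49).
    model = ARIMA(fev1, order=(2, 0, 2)).fit()
    print(f"limits=({lcl:.2f}, {ucl:.2f}), alerts={alerts}, forecast={model.forecast(3)}")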
|
42 |
Transitioning Business Intelligence from reactive to proactive decision-making systems : A qualitative usability study based on the Technology Acceptance Model Abormegah, Jude Edem, Bahadin Tarik, Dashti January 2020 (has links)
Companies today operate in dynamic environments, competing to find new revenue streams and strengthen their market positions by using new technologies that provide the capabilities to organize resources while accounting for changes in their environment. Decision making is therefore inevitable in combating uncertainty, and taking the optimal action by leveraging concepts and technologies that support decision making, such as Business Intelligence (BI) tools and systems, can determine a company's future. Companies can optimize their decision making with BI features like Data-driven Alerts, which send messages when fluctuations occur within a supervised threshold that reflects the state of business operations. The purpose of this research was to conduct an empirical study of how Swedish companies and enterprises in different industries apply BI tools with Data-driven Alert features for decision making. We further studied the characteristics of Data-driven Alerts in terms of usability from the perspectives of different industry professionals through the thematic lens of the Technology Acceptance Model (TAM) in a qualitative approach. We conducted interviews with professionals from diverse organizations and applied the Thematic Coding technique to the empirical results for further analysis. We found that allowing users to analyze data according to their own preferences provides managers and leaders with the information needed to empower strategic and tactical decision-making. Despite the emergence of state-of-the-art predictive analytics technologies such as Machine Learning and AI, the literature clearly states that these processes are too technical and complex for decision makers to comprehend. Ultimately, as we move towards automated decision making, prescriptive analytics will end up presenting descriptive options to the end user. We see this as an opportunity for reporting tools and data-driven alerts to exist in a symbiotic relationship with advanced analytics in decision-making contexts, improving their outcome, quality, and user friendliness.
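As a minimal illustration of the Data-driven Alert mechanism described above (a hypothetical sketch in Python, not any particular BI vendor's API), an alert fires when a monitored metric leaves its supervised threshold band:

    # Hypothetical sketch of a data-driven alert: notify when a KPI leaves its band.
    def check_alert(value, lower, upper):
        """Return an alert message if the monitored value breaches the threshold band."""
        if value < lower:
            return f"ALERT: value {value} fell below threshold {lower}"
        if value > upper:
            return f"ALERT: value {value} exceeded threshold {upper}"
        return None  # within the supervised band; no message sent

    for daily_sales in (120.0, 95.0, 210.0):   # simulated KPI stream
        msg = check_alert(daily_sales, lower=100.0, upper=200.0)
        if msg:
            print(msg)  # a BI tool would dispatch this via email, dashboard, etc.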
|
43 |
A qualitative analysis to investigate the enablers of big data analytics that impact sustainable supply chain Rodriguez Pellière, Lineth Arelys 27 August 2019 (has links)
Scholars and practitioners have already shown that Big Data and Predictive Analytics, also known in the literature as BDPA, can play a pivotal role in transforming and improving the functions of sustainable supply chain analytics (SSCA). However, there is limited knowledge about how BDPA can best be leveraged to grow social, environmental, and financial performance simultaneously. The literature around SSCA suggests that companies still struggle to implement SSCA practices, and researchers agree that there is still a need to understand the techniques, tools, and enablers of the basic SSCA concepts for their adoption; this is even more important for integrating BDPA as a strategic asset across business activities. Hence, this study investigates what the enablers of SSCA are and which BDPA tools and techniques enable the triple bottom line (3BL) of sustainability performance (environmental, social, and financial) through SCA. The thesis adopted moderate constructionism to understand how the enablers of big data impact sustainable supply chain analytics applications and performance.
The thesis also adopted a questionnaire and a case study as the research strategy in order to capture the different perceptions of people and companies regarding the application of big data to sustainable supply chain analytics. The thesis revealed better insight into the factors that can affect the adoption of big data in sustainable supply chain analytics. This research identified, based on variable loadings, the factors that impact the adoption of BDPA for SSCA, the tools and techniques that enable decision making through SSCA, and the coefficient of each factor for facilitating or delaying sustainability adoption, none of which had been investigated before. The findings suggest that the tools companies currently use cannot analyze large amounts of data on their own; companies need more appropriate tools for this analysis.
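As a hedged illustration of the "variable loadings" analysis the abstract mentions (the survey data and factor structure below are simulated, not the thesis's questionnaire), exploratory factor analysis estimates how strongly each survey item loads on latent adoption enablers:

    # Illustrative sketch: factor loadings from simulated questionnaire responses.
    # Assumes scikit-learn's FactorAnalysis; the real survey items differ.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(200, 2))                  # two latent enablers
    loadings_true = rng.uniform(0.3, 0.9, size=(2, 6))
    X = latent @ loadings_true + 0.3 * rng.normal(size=(200, 6))  # 6 Likert-style items

    fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
    print(np.round(fa.components_, 2))  # estimated loading of each item on each factor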
|
44 |
Sales forecasting for supply chain using Artificial Intelligence Mittal, Vaibhav January 2023 (has links)
Supply chain management and logistics are two sectors currently experiencing a transformation thanks to the advent of AI (Artificial Intelligence) technologies. Leveraging predictive analytics powered by AI presents businesses with novel opportunities to streamline their operations effectively. This study applies sales forecasting for predictive analysis using three distinct artificial intelligence paradigms: Long Short-Term Memory (LSTM) and Bayesian Neural Networks (BNN), both of which belong to the family of deep learning models, and Support Vector Regressors (SVR), a machine learning technique. The empirical data employed for this forecast stems from the historical sales data of Bactiguard, the collaborating company in this study. After the essential data preparation, these models are trained and their respective results assessed. The evaluation metrics used in this study are the mean absolute error (MAE), root mean square error (RMSE), and the R2 score. Upon analysis, the LSTM model emerges as the clear frontrunner, exhibiting the lowest error rates and the highest R2 score. The BNN follows closely, demonstrating credible performance, while the SVR lags behind with suboptimal results. In conclusion, this study highlights the accuracy and efficiency of artificial intelligence models in sales forecasting and underscores their practical, real-world applications.
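To make the evaluation step concrete, the following is a hedged sketch of scoring competing forecasts with the study's three metrics; the sales figures and model outputs are simulated, and scikit-learn is an assumed tooling choice:

    # Illustrative sketch: score competing sales forecasts with MAE, RMSE and R2.
    # Data and model outputs are simulated, not Bactiguard's.
    import numpy as np
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

    y_true = np.array([100, 120, 130, 125, 140, 160], dtype=float)  # actual sales
    forecasts = {
        "LSTM": np.array([102, 118, 128, 127, 142, 158], dtype=float),
        "BNN":  np.array([105, 115, 126, 130, 138, 155], dtype=float),
        "SVR":  np.array([110, 110, 120, 135, 130, 150], dtype=float),
    }

    for name, y_pred in forecasts.items():
        mae = mean_absolute_error(y_true, y_pred)
        rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # root of the MSE
        r2 = r2_score(y_true, y_pred)
        print(f"{name}: MAE={mae:.2f} RMSE={rmse:.2f} R2={r2:.3f}")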
|
45 |
Management of business processes of sports clubs by implementing the SAP Sports One system : master's thesis Utkin, I. A. January 2022 (has links)
The relevance of the topic stems from the need of the football club "Football Club" to automate the processes occurring in the club in order to increase the efficiency of the club's staff, reduce time and resource costs, and improve the team's results in various championships. Purpose of the work: automation of the processes occurring during the club's sports activities by implementing the SAP Sports One sports club management system. The object of study of this graduate work is the SAP Sports One sports club management system. The subject of study is the business process of predictive analytics for a football club's upcoming matches. The scientific significance lies in evaluating the effectiveness of the Agile methodology for the SAP Sports One implementation process using system dynamics models; the work also compares implementation alternatives using the analytic hierarchy process. The practical significance lies in automating the processes occurring during the club's sports activities through the implementation of the SAP Sports One system. Section 2 of this work can also serve as methodological guidance for studying the SAP Sports One system in various disciplines, and the transition to the Agile implementation methodology can be reused in future implementation projects.
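As a hedged illustration of the hierarchy-analysis step mentioned above (the real alternatives and pairwise judgments are not reproduced here), the analytic hierarchy process reduces a pairwise comparison matrix to priority weights via its principal eigenvector:

    # Illustrative AHP sketch: derive priority weights for three implementation
    # alternatives from a hypothetical pairwise comparison matrix.
    import numpy as np

    # A[i, j] = how strongly alternative i is preferred over j (Saaty's 1-9 scale).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                      # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                         # normalized priority vector

    ci = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
    print(f"priorities={np.round(weights, 3)}, CI={ci:.3f}")  # CI near 0 = consistent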
|
46 |
The iterative frame : algorithmic video editing, participant observation & the black box Rapoport, Robert S. January 2016 (has links)
Machine learning is increasingly involved in both our production and consumption of video. One symptom of this is the appearance of automated video editing applications. As this technology spreads rapidly to consumers, the need for substantive research about its social impact grows. To this end, this project maintains a focus on video editing as a microcosm of larger shifts in cultural objects co-authored by artificial intelligence. The window in which this research occurred (2010-2015) saw machine learning move increasingly into the public eye, and with it ethical concerns. What follows is, on the most abstract level, a discussion of why these ethical concerns are particularly urgent in the realm of the moving image. Algorithmic editing consists of software instructions to automate the creation of timelines of moving images. The criteria that this software uses to query a database are variable. Algorithmic authorship already exists in other media, but I will argue that the moving image is a separate case insofar as software can already generate the raw material of text and music on its own. The performance of a trained actor still cannot be generated by software. Thus, my focus is on the relationship between live embodied performance and the subsequent algorithmic editing of that footage. This is a process that can employ other software like computer vision (to analyze the content of video) and predictive analytics (to guess what kind of automated film to make for a given user). How is performance altered when it has to communicate to human and non-human alike? The ritual of the iterative frame gives literal form to something that throughout human history has been a projection: the omniscient participant observer, more commonly known as the Divine. We experience black-boxed software (AIs, specifically neural networks, which are intrinsically opaque) as functionally omniscient and tacitly allow it to edit more and more of life (e.g. filtering articles, playlists and even potential spouses). As long as it remains disembodied, we will continue to project the Divine onto the black box, causing cultural anxiety. In other words, predictive analytics alienate us from the source code of our cultural texts. The iterative frame, then, is a space in which these forces can be inscribed on the body, and hence narrated. The algorithmic editing of content is already taken for granted. The editing of moving images, in contrast, still requires a human hand. We need to understand the social power of moving image editing before it is delegated to automation. Practice Section: This project is practice-led, meaning that the portfolio of work was produced as it was being theorized. To underscore this, the portfolio comes at the end of the document. Video editors use artificial intelligence (AI) in a number of different applications, from deciding the sequencing of timelines to using facial and language detection to find actors in archives. This changes traditional production workflows on a number of levels. How can the single decision to cut between two frames of video speak to the larger epistemological shifts brought on by predictive analytics and Big Data (upon which they rely)? When predictive analytics begin modeling the world of moving images, how will our own understanding of the world change? In the practice-based section of this thesis, I explore how these shifts will change the way in which actors might approach performance. What does a gesture mean to AI, and how will the editor decontextualize it?
The set of a video shoot that will employ an element of AI in editing represents a move towards the ritualization of production, summarized in the term the 'iterative frame'. The portfolio contains eight works that treat the set as a microcosm of larger shifts in the production of culture. There is, I argue, metaphorical significance in the changing understanding of terms like 'continuity' and 'sync' on the AI-watched set. Theory Section: In the theoretical section, the approach is broadly comparative. I contextualize the current dynamic by looking at previous shifts in technology that changed the relationship between production and post-production, notably the lightweight recording technology of the 1960s. This section also draws on debates in ethnographic filmmaking about the matching of film and ritual. In this body of literature, there is a focus on how participant observation can be formalized in film. Triangulating between event, participant observer, and edit grammar in ethnographic filmmaking provides a useful analogy for understanding how AI as film editor might function in relation to contemporary production. Rituals occur in a frame that is dependent on a spatially/temporally separate observer. This dynamic also exists on sets bound for post-production involving AI. The convergence of film grammar and ritual grammar occurred in the 1960s under the banner of cinéma vérité, in which the relationship between participant observer/ethnographer and the subject became most transparent. In Rouch and Morin's Chronicle of a Summer (1961), reflexivity became ritualized in the form of on-screen feedback sessions. The edit became transparent: the black box of cinema disappeared. Today, as artificial intelligence enters the film production process, this relationship begins to reverse: feedback, while it exists, becomes less transparent. The weight of the feedback ritual gets gradually shifted from presence and production to montage and post-production. Put differently, in cinéma vérité the participant observer was most present in the frame. As participant observation gradually becomes shared with code, it becomes more difficult to give it an embodied representation, and thus its presence is felt more in the edit of the film. The relationship between the ritual actor and the participant observer (the algorithm) is completely mediated by the edit, a reassertion of the black box where once it had been transparent. The crucible for looking at the relationship between algorithmic editing, participant observation, and the black box is the subject in trance. In ritual trance the individual is subsumed by collective codes. Long before the advent of automated editing, trance was an epistemological problem posed to film editing. In the iterative frame, for the first time, film grammar can echo ritual grammar and indeed become continuous with it. This occurs through removing the act of cutting from the causal world, and projecting this logic of post-production onto performance. Why does this occur? Ritual, and specifically ritual trance, is the moment when a culture gives embodied form to what it could not otherwise articulate. The trance of predictive analytics, the AI that increasingly choreographs our relationship to information, is the ineffable that finds form in the iterative frame. In the iterative frame a gesture never exists in a single instance, but in a potential state.
The performers in this frame begin to understand themselves in terms of how automated indexing processes reconfigure their performance. To the extent that gestures are complicit with this mode of databasing, they can be seen as votive toward the algorithmic. The practice section focuses on the poetics of this position. Chapter One focuses on cinéma vérité as a moment in which the relationship between production and post-production shifted as a function of more agile recording technology, allowing the participant observer to enter the frame. This shift becomes a lens for looking at the changes that AI might bring. Chapter Two treats the work of Pierre Huyghe as a 'liminal phase' in which a new relationship between production and post-production is explored. Finally, Chapter Three looks at a film in which actors perform with awareness that the footage will be processed by an algorithmic edit. / The conclusion looks at how this way of relating to AI, especially commercial AI, through embodied performance could foster a more critical relationship to the proliferating black-boxed modes of production.
|
47 |
Design of information tree for support related queries: Axis Communications AB : An exploratory research study in debug suggestions with machine learning at Axis Communications, Lund Rangavajjula, Santosh Bharadwaj January 2017 (has links)
Context: In today's world, we have access to more data than at any time in the past, with more and more of it coming from smartphones, sensor networks, and business processes. But most of this data is meaningless if it is not properly formatted and utilized. Traditionally, in service support teams, issues raised by customers are processed locally, compiled into reports, and sent up the support line for resolution. Resolution then depends on the expertise of the technicians or developers and their experience handling similar issues, which limits the size, speed, and scale of the problems that can be resolved. One solution is to make relevant information, tailored to the issue under investigation, easily available. Objectives: The focus of the thesis is to improve the turnaround time of customer queries using recommendations, and to evaluate this by defining metrics against the existing workflow. As Artificial Intelligence applications can span a broad spectrum, we confine the scope to software service and Issue Tracking Systems. Software support is a complicated process, as it involves various stakeholders with conflicting interests. Over the course of this work, we are primarily interested in evaluating, customizing, and comparing different AI solutions specifically in the customer support space. Methods: The thesis work was carried out through controlled experiments using different datasets and machine learning models. Results: We classified Axis data and Bugzilla (Eclipse) data using Decision Trees, K Nearest Neighbors, Neural Networks, and Naive Bayes, and evaluated them using precision, recall, and F-score. K Nearest Neighbors had a precision of 0.11 and recall of 0.11; Decision Trees had a precision of 0.11 and recall of 0.11; Neural Networks had a precision of 0.13 and recall of 0.11; and Naive Bayes had a precision of 0.05 and recall of 0.11. The results show too many false positives and true negatives to be able to recommend reliably. Conclusions: In this thesis work, we reviewed and synthesized 33 research articles. Existing systems in place and the current state of the art are described. A debug suggestion tool was developed in Python with scikit-learn. Experiments with different machine learning models were run on the tool, and the highest scores, 0.13 (precision), 0.10 (F-score), and 0.11 (recall), were observed with an MLP neural network classifier.
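Since the tool was built in Python with scikit-learn, the evaluation loop can be sketched as follows; this is a hypothetical reconstruction on synthetic data, not the Axis or Bugzilla corpus, and the hyperparameters are illustrative:

    # Hypothetical sketch of the classifier comparison; the issue-report corpus
    # is synthetic here, and hyperparameters are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import precision_score, recall_score, f1_score

    X, y = make_classification(n_samples=500, n_features=20, n_classes=5,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {"KNN": KNeighborsClassifier(),
              "DecisionTree": DecisionTreeClassifier(random_state=0),
              "MLP": MLPClassifier(max_iter=500, random_state=0),
              "NaiveBayes": GaussianNB()}

    for name, clf in models.items():
        y_pred = clf.fit(X_tr, y_tr).predict(X_te)
        print(name,
              f"precision={precision_score(y_te, y_pred, average='macro', zero_division=0):.2f}",
              f"recall={recall_score(y_te, y_pred, average='macro'):.2f}",
              f"f1={f1_score(y_te, y_pred, average='macro'):.2f}")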
|
48 |
Predictive Analytics - Process and Development of Predictive Models Praus, Ondřej January 2013 (has links)
This master's thesis focuses on predictive analytics, a type of analysis that uses historical data and predictive models to predict future phenomena. The main goal of the thesis is to describe predictive analytics and its process from both a theoretical and a practical point of view. A secondary goal is to implement a predictive analytics project in an important insurance company operating in the Czech market and to improve the current state of detection of fraudulent insurance claims. The thesis is divided into a theoretical and a practical part. The process of predictive analytics and selected types of predictive models are described in the theoretical part. The practical part describes the implementation of predictive analytics in a company: first the data organization techniques used in datamart development, then the predictive models implemented on data from the prepared datamart. The thesis includes examples and problems with their solutions. Its main contribution is the detailed description of the project implementation, whose level of detail makes the field of predictive analytics easier to understand. Another contribution of the successfully implemented predictive analytics is improved detection of fraudulent insurance claims.
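As a hedged sketch of the kind of predictive model such a project might build on the datamart (the actual features and model family are not disclosed in the abstract; logistic regression and the synthetic data are assumptions):

    # Hypothetical sketch: scoring insurance claims for fraud likelihood.
    # Features, data and the choice of logistic regression are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))   # e.g. claim amount, reporting delay, history, channel
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.5).astype(int)  # "fraud" flag

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]       # fraud probability per claim
    flagged = np.argsort(scores)[::-1][:10]        # top claims for investigator review
    print(f"AUC={roc_auc_score(y_te, scores):.2f}, flagged={flagged}")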
|
49 |
A Technological Solution to Identify the Level of Risk to Be Diagnosed with Type 2 Diabetes Mellitus Using Wearables Nuñovero, Daniela, Rodríguez, Ernesto, Armas, Jimmy, Gonzalez, Paola 01 January 2021
The full text of this work is not available in the UPC Academic Repository due to restrictions of the publisher where it has been published. / This paper proposes a technological solution using a predictive analysis model to identify and reduce the level of risk for type 2 diabetes mellitus (T2DM) through a wearable device. Our proposal is based on previous models that use the auto-classification algorithm, together with the addition of new risk factors that contribute more strongly to the presumptive diagnosis of the user who wants to check their level of risk. The purpose is the primary prevention of type 2 diabetes mellitus by a non-invasive method composed of the phases: (1) capture and storage of risk factors; (2) predictive analysis model; (3) presumptive results and recommendations; and (4) preventive treatment. The main contribution is the development of the proposed application. / Peer reviewed
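A minimal sketch of the risk-scoring idea, assuming hypothetical risk-factor inputs and a decision-tree stand-in for the paper's auto-classification algorithm (the exact features and thresholds are not reproduced):

    # Hypothetical sketch: classify T2DM risk level from wearable-captured factors.
    # Feature set, training data and the tree classifier are illustrative assumptions.
    from sklearn.tree import DecisionTreeClassifier

    # Columns: age, BMI, resting heart rate, daily step count (from the wearable).
    X_train = [[45, 31.0, 82, 3000], [30, 22.5, 64, 11000],
               [60, 29.0, 78, 4500], [25, 24.0, 60, 9000],
               [55, 33.5, 85, 2500], [38, 26.0, 70, 7000]]
    y_train = ["high", "low", "medium", "low", "high", "medium"]  # presumptive risk labels

    clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    new_user = [[50, 30.2, 80, 3500]]        # captured risk factors, phase (1)
    print(clf.predict(new_user)[0])          # presumptive result, phase (3)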
|
50 |
Development of an app based on predictive models and GPS technology for the recommendation and reservation of spaces in parking lots Diaz Mantilla, Francisco Alberto, Ocampo Hidalgo, Alexander 01 June 2021
This thesis project gathers information about the process and sub-processes involved in the management of occasional parking sales at Central Parking Systems S.A., as well as the new ideas proposed for improving the processes in the field of action.
Chapter 1 presents the theoretical framework, comprising the theoretical foundations of the business and current trends and technologies; it also introduces the organization, its organizational structure, its strategic objectives, and its processes, and closes by stating the problem.
Chapter 2 presents the general and specific objectives of the project together with their rationale, lays out the different benefits the project will bring, and closes with a comparative evaluation of our solution against three other solutions and a detailed analysis.
Chapter 3 presents the business modeling: the UML artifacts such as the business use case diagram and the business actors and workers, followed by the use case specifications associated with their business rules and represented by activity diagrams.
Chapter 4 addresses system analysis, presenting the functional and non-functional requirements and how the functional requirements are grouped into packages and system use cases.
Chapter 5 defines the software architecture: the goals and constraints of the architecture, the mechanisms that support it, and finally the different views (logical, implementation, deployment).
Chapter 6 covers the construction of the proposed solution's patterns, together with the pattern diagram and the data dictionary.
Chapter 7 defines software quality and our test plan. These plans are a fundamental part of the software and provide the guidelines to follow in order to guarantee the quality and proper functioning of our system.
Chapter 8 addresses project management, presenting the key stakeholders, the WBS, and finally the work schedule as a Gantt chart. / Thesis
|