81

"Interest rate optimization for consumer credits: Empirical evidence from an online Channel"

Lavandero Ivelic, Martín Carlos January 2019 (has links)
Thesis submitted for the degree of Industrial Civil Engineer / 18/03/2024
82

SYSTEMS SUPPORT FOR DATA ANALYTICS BY EXPLOITING MODERN HARDWARE

Hongyu Miao (11751590) 03 December 2021 (has links)
A large volume of data is continuously being generated by data centers, humans, and the Internet of Things (IoT). To yield useful insights, this enormous volume of data must be processed in time, with high throughput, low latency, and high accuracy. To meet such performance demands, vendors are shipping a wide range of new hardware, such as multi-core CPUs, 3D-stacked memory, embedded microcontrollers, and other accelerators.

However, traditional operating systems (OSes) and data analytics frameworks, the key layer that bridges high-level data processing applications and low-level hardware, fail to meet these requirements in the face of quickly evolving hardware and the explosive growth of data. For instance, general-purpose OSes are not aware of the unique characteristics and demands of data processing applications. Data analytics engines for stream processing, e.g., Apache Spark and Beam, always add more machines to deal with more data but leave every single machine underutilized, without fully exploiting the underlying hardware features, which leads to poor efficiency. Data analytics frameworks for machine learning inference on IoT devices cannot run neural networks that exceed SRAM size, which rules out many important use cases.

To bridge the gap between the performance demands of data analytics and the features of emerging hardware, this thesis explores runtime system designs for high-level data processing applications that exploit low-level modern hardware features. We study two important data analytics applications, real-time stream processing and on-device machine learning inference, on three important hardware platforms spanning the Cloud and the Edge: multicore CPUs, a hybrid memory system combining 3D-stacked memory and general DRAM, and embedded microcontrollers with limited resources.

To speed up and enable the two data analytics applications on the three hardware platforms, this thesis contributes three related research projects. In project StreamBox, we exploit the parallelism and memory hierarchy of modern multicore hardware on single machines for stream processing, achieving scalable and highly efficient performance. In project StreamBox-HBM, we exploit hybrid memories to balance bandwidth and latency, achieving memory scalability and highly efficient performance. StreamBox and StreamBox-HBM both offer orders-of-magnitude performance improvements over the prior state of the art, opening up new applications with higher data processing needs. In project SwapNN, we investigate a system solution for microcontrollers (MCUs) to execute neural network (NN) inference out-of-core without losing accuracy, enabling new use cases and significantly expanding the scope of NN inference on tiny MCUs.

We report the system designs, system implementations, and experimental results. Based on our experience building the above systems, we provide general guidance on designing runtime systems across the hardware/software stack for a wider range of new applications on future hardware platforms.
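The abstract does not spell out SwapNN's mechanism, but the general idea of out-of-core NN inference can be sketched: stream one layer's parameters at a time from external storage so that peak memory stays within a small SRAM budget. The following minimal Python sketch illustrates that idea; the file layout and the `load_layer` helper are hypothetical illustrations, not SwapNN's actual design.

```python
import numpy as np

def load_layer(path):
    # Hypothetical stand-in for streaming one layer's parameters from
    # external flash into a small RAM buffer (the abstract does not
    # specify SwapNN's actual I/O mechanism).
    data = np.load(path)              # .npz archive holding 'W' and 'b'
    return data["W"], data["b"]

def out_of_core_inference(x, layer_paths):
    """Evaluate a feed-forward network layer by layer so that only one
    layer's weights are resident in memory at any time."""
    for path in layer_paths:
        W, b = load_layer(path)           # swap this layer in
        x = np.maximum(W @ x + b, 0.0)    # dense layer + ReLU
        del W, b                          # evict before the next layer
    return x
```

The point of the pattern is that peak memory is bounded by the largest single layer rather than the whole network, which is what allows networks larger than SRAM to run at all.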
83

HRD Professionals' Experience Utilizing Data Analytics in the Training Evaluation Process

Anthony E Randolph (11831450) 18 December 2021 (has links)
In the past, Human Resource Development (HRD) professionals have faced barriers to gaining access to the data they need to conduct higher-level evaluations. However, recent technological innovations have presented opportunities for them to obtain this data and, consequently, to apply new approaches to the training evaluation process. One such approach is the application of data analytics. Because organizations have begun to embrace its use, recent research in the literature has focused on the promotion of analytics rather than its practical application in the organization. This study investigated how HRD professionals utilize data analytics in the training evaluation process. It contributes to the body of research on the practical application of analytics in determining training effectiveness. The Unified Theory of Acceptance and Use of Technology (UTAUT) and sociomateriality served as the theoretical framework for understanding how HRD professionals use data analytics in the training evaluation process. To address the research objective, a qualitative descriptive design was employed to investigate the phenomenon of lived experience: how HRD professionals use data analytics in the training evaluation process. Data were collected through semi-structured interviews with six participants who were front and center in their organization's transition to the analytics tool Metrics That Matter (MTM) for evaluating training initiatives. A thematic analysis approach was applied. The findings suggest three factors that influenced HR professionals to use human resource analytics, while revealing four ways they used those analytics in the training evaluation process. More importantly, the findings provide training departments and HRD professionals with recommendations for expanded job role and/or function descriptions, as well as best practices for incorporating data analytics in the training evaluation process.
84

Turbine Generator Performance Dashboard for Predictive Maintenance Strategies

Emily R Rada (11813852) 19 December 2021 (has links)
Equipment health is the root of productivity and profitability in a company; through the use of machine learning and advancements in computing power, a maintenance strategy known as predictive maintenance (PdM) has emerged. The predictive maintenance approach utilizes performance and condition data to forecast necessary machine repairs. Predicting maintenance needs reduces the likelihood of operational errors, aids in the avoidance of production failures, and allows for preplanned outages. The PdM strategy is based on machine-specific data, which proves to be a valuable tool. The machine data provide quantitative proof of operation patterns and production while offering machine health insights that may otherwise go unnoticed.

Purdue University's Wade Utility Plant is responsible for providing reliable utility services for the campus community. The plant has invested in an equipment monitoring system for a thirty-megawatt turbine generator. The monitoring system records operational and performance data as the turbine generator supplies the campus with electricity and high-pressure steam. Unplanned and surprise maintenance needs in the turbine generator hinder utility production and lessen the dependability of the system.

This study leverages the turbine generator data that the Wade Utility Plant records and stores to justify equipment care and provide early error detection at an in-house level. The research collects and aggregates operational, monitoring, and performance-based data for the turbine generator in Microsoft Excel, creating a dashboard that visually displays and statistically monitors variables for discrepancies. The dashboard records ninety days of user-selected data, tracked hourly, determining averages and extrema and alerting the user as data approach recommended warning levels. Microsoft Excel offers a low-cost, accessible platform for data collection and analysis, providing an adaptable and comprehensible view of the turbine generator data through visual trends, simple statistics, and status updates. The dashboard offers the ability to forecast maintenance needs, plan work outages, and adjust operations while continuing to provide reliable services that meet Purdue University's utility demands.
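As a sketch of the dashboard's core logic (averages, extrema, and threshold alerts over ninety days of hourly data), the following Python/pandas snippet reproduces the idea outside Excel. The sensor tags and warning levels are hypothetical; the plant's actual monitored variables are not given in the abstract.

```python
import pandas as pd

# Hypothetical sensor tags and warning levels; the plant's actual
# variables and limits are not given in the abstract.
WARN_LEVELS = {"bearing_temp_c": 95.0, "vibration_mm_s": 7.1}
ALERT_FRACTION = 0.9   # flag readings within 10% of the warning level

def dashboard_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize the last ninety days of hourly data: average,
    extrema, and an alert flag when a reading nears its warning level."""
    cutoff = df.index.max() - pd.Timedelta(days=90)
    recent = df.loc[df.index >= cutoff]
    rows = []
    for tag, warn in WARN_LEVELS.items():
        s = recent[tag]
        rows.append({"tag": tag, "mean": s.mean(), "min": s.min(),
                     "max": s.max(),
                     "alert": bool((s >= ALERT_FRACTION * warn).any())})
    return pd.DataFrame(rows)
```

Given a DataFrame indexed by timestamp with those columns, `dashboard_summary(df)` yields one row per monitored variable, mirroring the statistics the Excel dashboard tracks.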
85

Data-Driven Decision Support Systems for Product Development - A Data Exploration Study Using Machine Learning

Aeddula, Omsri January 2021 (has links)
Modern product development is a complex chain of events and decisions. The ongoing digital transformation of society and increasing demand for innovative solutions put pressure on organizations to maintain or increase their competitiveness. As a consequence, a major challenge in product development is the search for information, its analysis, and the building of knowledge. This is even more challenging when the design element comprises a complex structural hierarchy and limited data generation capabilities, and more pronounced still in the conceptual stage of product development, where information is scarce, vague, and potentially conflicting. The ability to explore high-level, useful information using a machine learning approach in the conceptual design stage would hence be of importance in supporting design decision-makers, since the decisions made at this stage impact the success of the overall product development process. The thesis aims to investigate the conceptual stage of product development, proposing methods and tools to support the decision-making process through the building of data-driven decision support systems. The study highlights how data can be utilized and visualized to extract useful information in design exploration studies at the conceptual stage of product development. The ability to build data-driven decision support systems in the early phases facilitates more informed decisions. The thesis presents initial descriptive study findings from the empirical studies, showing the capability of machine learning approaches to extract useful information and to build data-driven decision support systems. It first describes how a linear regression model and artificial neural networks extract useful information in design exploration, providing support for decision-makers to understand the consequences of design choices through cause-and-effect relationships at a detailed level. Furthermore, the presented approach provides input to a novel visualization construct intended to enhance comprehensibility within cross-functional design teams. The thesis further studies how data can be augmented and analyzed to extract the necessary information from an existing design element to support the decision-making process in an oral healthcare context.
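As an illustration of the regression-based design exploration described above, the following minimal Python sketch fits a linear model to synthetic design data and reads the coefficients as first-order cause-and-effect estimates. The design variables, response, and data are invented for illustration; the thesis's actual variables are not given in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for design-exploration data (hypothetical
# design variables and response, not the thesis's data).
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))        # e.g. three normalized design variables
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.2 * rng.normal(size=200)

model = LinearRegression().fit(X, y)
# Coefficients read as first-order cause-and-effect estimates: how much
# the response moves per unit change in each design variable.
for name, coef in zip(["x1", "x2", "x3"], model.coef_):
    print(f"{name}: {coef:+.2f}")
```

In a decision-support setting, these coefficients (or their neural-network analogues, e.g. sensitivities) are what let designers trace the consequences of a design choice back to its drivers.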
86

Development of systemic methods to improve management techniques based on Balanced Scorecard in Manufacturing Environment. / Desarrollo de métodos sistémicos para la mejora de las técnicas de gestión basadas en el cuadro integral de mando en entornos de fabricación

Sánchez Márquez, Rafael 07 January 2020 (has links)
[ES] El "Balanced Scorecard" (BSC) como "Performance Management System" (PMS) se ha difundido por todo el mundo desde que Kaplan y Norton (1992) establecieron sus fundamentos teóricos. Kaplan (2009) afirmó que el uso del BSC y, especialmente, la conversión de estrategias en acciones era más un arte que una ciencia. La falta de evidencia de la existencia de relaciones de causa-efecto entre Key Performance Indicatiors (KPIs) de diferentes perspectivas y de métodos sólidos y científicos para su uso, eran algunas de las causas de sus problemas. Kaplan emplazó a la comunidad científica a confirmar los fundamentos del BSC y a desarrollar métodos científicos. Varios trabajos han intentado mejorar el uso del BSC. Algunos utilizan herramientas heurísticas, que tratan con variables cualitativas. Otros, métodos estadísticos y datos reales de KPI, pero aplicados a un período específico, que es una visión estática y que requiere muestras a largo plazo y recursos muy especializados cada vez que los ejecutivos necesitan evaluar el impacto de las estrategias. Esta tesis también aborda el retraso entre variables de "entrada" y de "salida", además de la falta de trabajos centrados en el entorno de fabricación, que constituye su objetivo principal. El primer objetivo de este trabajo es desarrollar una metodología para evaluar y seleccionar los principales KPI de salida, que explican el desempeño de toda la compañía. Usa las relaciones entre variables de diferentes dimensiones descritas por Kaplan. Este método también considera el retraso entre las variables. El resultado es un conjunto de KPI principales de salida, que resume todo el BSC, lo que reduce drásticamente su complejidad. El segundo objetivo es desarrollar una metodología gráfica que utilice ese conjunto de KPI principales de salida para evaluar la efectividad de las estrategias. Actualmente, los gráficos son comunes entre los profesionales, pero solo Breyfogle (2003) ha intentado distinguir entre un cambio real significativo y un cambio debido a la incertidumbre de usar muestras. Este trabajo desarrolla aún más el método de Breyfogle para abordar sus limitaciones. El tercer objetivo es desarrollar un método que, una vez demostrada gráficamente la efectividad de las estrategias, cuantifique su impacto en el conjunto de KPI principales de salida. 10 El cuarto y último método desarrollado se centra en el diagnóstico del sistema de gestión de la calidad para revelar cómo funciona en términos de las relaciones entre los KPI internos (dentro de la empresa) y externos (relacionados con el cliente) para mejorar la satisfacción del cliente. La aplicación de los cuatro métodos en la secuencia correcta constituye una metodología completa que se puede aplicar en cualquier empresa de fabricación para mejorar el uso del cuadro de mando integral como herramienta científica. Sin embargo, los profesionales pueden optar por aplicar solo uno de los cuatro métodos o una combinación de ellos, ya que la aplicación de cada uno de ellos es independiente y tiene sus propios objetivos y resultados. / [CAT] El "Balanced Scorecard" (BSC) com "Performance Management System" (PMS) s'ha difós per tot el món des que Kaplan i Norton (1992) van establir els seus fonaments teòrics. Kaplan (2009) va afirmar que l'ús del BSC i, especialment, la conversió d'estratègies en accions era més un art que una ciència. 
La manca d'evidència de l'existència de relacions de causa-efecte entre Key Performance Indicatiors (KPIs) de diferents perspectives i de mètodes sòlids i científics pel seu ús, eren algunes de les causes dels seus problemes. Kaplan va emplaçar a la comunitat científica a confirmar els fonaments del BSC i a desenvolupar mètodes científics. Diversos treballs han intentat millorar l'ús del BSC. Alguns utilitzen eines heurístiques, que tracten amb variables qualitatives. D'altres, mètodes estadístics i dades reals de KPI, però aplicats a un període específic, que és una visió estàtica i que requereix mostres a llarg termini i recursos molt especialitzats cada vegada que els executius necessiten avaluar l'impacte de les estratègies. Aquesta tesi també aborda el retard entre variables d ' "entrada" i de "eixida", a més de la manca de treballs centrats en l'entorn de fabricació, que és el seu objectiu principal. El primer objectiu d'aquest treball és desenvolupar una metodologia per avaluar i seleccionar els principals KPI d'eixida, que expliquen l'acompliment de tota la companyia. Es fa servir les relacions entre variables de diferents dimensions descrites per Kaplan. Aquest mètode també considera el retard entre les variables. El resultat és un conjunt de KPI principals d'eixida, que resumeix tot el BSC, i que redueix dràsticament la seua complexitat. El segon objectiu és desenvolupar una metodologia gràfica que utilitze aquest conjunt de KPI principals d'eixida per avaluar l'efectivitat de les estratègies. Actualment, els gràfics són comuns entre els professionals, però només Breyfogle (2003) ha intentat distingir entre un canvi real significatiu i un a causa de la incertesa d'utilitzar mostres. Aquest treball desenvolupa encara més el mètode de Breyfogle per abordar les seues limitacions. El tercer objectiu és desenvolupar un mètode que, una vegada demostrada gràficament l'efectivitat de les estratègies, quantifique el seu impacte en el conjunt de KPI principals d'exida. El quart i l'últim mètode es centra en el diagnòstic del sistema de gestió de la qualitat per a revelar com funcionen les relacions entre els KPI interns (dins de l'empresa) i externs (relacionats amb el client) per millorar la satisfacció del client. L'aplicació dels quatre mètodes en la seqüència correcta constitueix una metodologia completa que es pot aplicar en qualsevol empresa de fabricació per millorar l'ús del quadre de comandament integral com a eina científica. No obstant això, els professionals poden optar per aplicar només un dels quatre mètodes o una combinació d'ells, ja que l'aplicació de cada un d'ells és independent i té els seus propis objectius i resultats. / [EN] The Balanced Scorecard (BSC) as a Performance Management Method (PMS) has been spread worldwide since Kaplan and Norton (1992) established its theoretical foundations. Kaplan (2009) claimed that the use of the BSC and especially turning strategies into actions was more an art than a science. The lack of evidence of the existence of such cause and effect relationships between Key Performance Indicators (KPIs) from different perspectives and the lack of robust methods to use it as a scientific tool were some of the causes of its problems. Kaplan placed the scientific community to confirm the foundations of the BSC theory and to develop methods for its use as a scientific tool. Several works have attempted to enhance the use of the balanced scorecard. Some methods use heuristic tools, which deal with qualitative variables. 
Some others use statistical methods and actual KPIs data, but applied to a specific period, which is a static vision and needing long-term samples and expertise resources to apply advanced analytic methods each time executives need to assess the impact of strategies. This thesis also tackles the lag between "input" and "output" variables. Moreover, there is a lack of works focused on the manufacturing environment, which is its main objective. The first objective of this work is to develop a methodology to assess and select the main output KPIs, which explains the performance of the whole company. It is taking the advantage of the relationships between variables from different dimensions described by Kaplan. This method also considers the potential lag between variables. The result is a set of main output KPIs, which summarizes the whole BSC, thus dramatically reducing its complexity. The second objective is to develop a graphical methodology that uses that set of main output KPIs to assess the effectiveness of strategies. Currently, KPIs charts are common among practitioners, but only Breyfogle (2003) has attempted to distinguish between a significant actual change in the metrics and a change due to the uncertainty of using samples. This work further develops Breyfogle's method to tackle its limitations. The third objective is to develop a method that, once the effectiveness of those strategies and actions have been proved graphically, quantifies their impact on the set of main output KPIs. The ultimate goal was to develop a method that, using data analytics, will focus on the diagnosis of the quality management system to reveal how it works in terms of the relationships between internal (within the company) and external (costumer-related) KPIs to improve customer satisfaction. The application of the four methods in the right sequence makes up a comprehensive methodology that can be applied in any manufacturing company to enhance the use of the balanced scorecard as a scientific tool. However, professionals may choose to apply only one of the four methods or a combination of them, since the application of each of them is independent and has its own objectives and results. / Sánchez Márquez, R. (2019). Development of systemic methods to improve management techniques based on Balanced Scorecard in Manufacturing Environment [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/134022 / TESIS
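For the second objective, the abstract points to Breyfogle's (2003) approach of separating real change from sampling noise. A standard tool for that is an individuals (XmR) control chart; the Python sketch below shows that baseline technique on invented monthly KPI values, not the thesis's refined method.

```python
import numpy as np

def individuals_chart_limits(kpi: np.ndarray):
    """Control limits for an individuals (XmR) chart, the chart style
    Breyfogle (2003) uses to separate real change from sampling noise."""
    center = kpi.mean()
    mr = np.abs(np.diff(kpi)).mean()   # average moving range
    sigma = mr / 1.128                 # d2 constant for subgroups of 2
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical monthly KPI sample: a point outside the limits signals a
# statistically real shift rather than sample-to-sample noise.
kpi = np.array([98.2, 97.9, 98.4, 98.1, 97.8, 98.3, 100.2])
lcl, center, ucl = individuals_chart_limits(kpi)
print([("shift" if (v < lcl or v > ucl) else "noise") for v in kpi])
```

The chart answers the question the thesis raises: whether a KPI movement after a strategy change is evidence of effectiveness or merely sampling uncertainty.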
87

Methods in intelligent transportation systems exploiting vehicle connectivity, autonomy and roadway data

Zhang, Yue 29 September 2019 (has links)
Intelligent transportation systems involve a variety of information and control systems methodologies, from cooperative systems that aim at traffic flow optimization by means of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, to information fusion from multiple traffic sensing modalities. This thesis addresses three problems in intelligent transportation systems: one in optimal control of connected automated vehicles, one in discrete-event and hybrid traffic simulation modeling, and one in sensing and classifying roadway obstacles in smart cities. The first problem relates to optimally controlling connected automated vehicles (CAVs) crossing an urban intersection without any explicit traffic signaling. A decentralized optimal control framework is established whereby, under proper coordination among CAVs, each CAV can jointly minimize its energy consumption and travel time subject to hard safety constraints. A closed-form analytical solution is derived that takes speed, control, and safety constraints into consideration. The analytical solution of each such problem, when it exists, yields the optimal CAV acceleration/deceleration. The framework accommodates turns and ensures the absence of collisions, and a measure of passenger comfort is taken into account while vehicles make turns. In addition to the first-in-first-out (FIFO) ordering structure, the concept of dynamic resequencing is introduced, which aims at further increasing traffic throughput. The thesis also studies the impact of CAVs and shows the benefit that can be achieved by incorporating CAVs into conventional traffic. To validate the effectiveness of the proposed solution, a discrete-event and hybrid simulation framework based on SimEvents is proposed, which facilitates safety and performance evaluation of an intelligent transportation system. The traffic simulation model enables traffic studies at the microscopic level, including new control algorithms for CAVs under different traffic scenarios, the event-driven aspects of transportation systems, and the effects of communication delays. The framework spans multiple toolboxes, including MATLAB, Simulink, and SimEvents. In another direction, an unsupervised anomaly detection system is developed based on data collected through the Street Bump smartphone application. The system, built on signal processing techniques and the concept of information entropy, generates a prioritized list of roadway obstacles, such that the higher-ranked entries are most likely to be actionable bumps (e.g., potholes) requiring immediate attention, while the lower-ranked entries are most likely to be non-actionable bumps (e.g., flat castings, cobblestone streets, speed bumps) for which no immediate action is needed. This system enables the City to prioritize repairs efficiently. Results on an actual data set provided by the City of Boston illustrate the feasibility and effectiveness of the system in practice.
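The abstract names information entropy as an ingredient of the bump-ranking system without giving the algorithm. One plausible reading, sketched below in Python, scores each accelerometer segment by the Shannon entropy of its amplitude histogram and ranks segments so the most irregular signals come first. The segments, bin count, and scoring are illustrative assumptions, not the system's actual design.

```python
import numpy as np

def signal_entropy(segment: np.ndarray, bins: int = 16) -> float:
    """Shannon entropy of a vibration segment's amplitude histogram:
    one plausible entropy-based anomaly score, assumed for illustration."""
    hist, _ = np.histogram(segment, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical accelerometer segments standing in for Street Bump data:
# rank bumps so the most information-rich signals rise to the top.
rng = np.random.default_rng(1)
segments = {"candidate_a": rng.normal(0, 3, 256),
            "candidate_b": np.sin(np.linspace(0, 6, 256)),
            "candidate_c": rng.normal(0, 0.3, 256)}
ranked = sorted(segments, key=lambda k: signal_entropy(segments[k]),
                reverse=True)
print(ranked)
```

A prioritized list like `ranked` is the kind of output the abstract describes: higher-scored segments are reviewed first as likely actionable obstacles.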
88

Modelo de madurez de analítica de datos para el sector financiero / Data Analytics Maturity Model for Financial Sector Companies

Perales Manrique, Jonathan Hernán, Molina Chirinos, Jorge Alonso 02 March 2020 (has links)
Data analytics allows organizations in the financial sector to gain a competitive advantage through processes aimed at obtaining data, processing them, and displaying them as valuable information, both to understand the behavior of their clients and to be prepared against risks such as money laundering and credit fraud. However, organizations cannot easily identify the gaps related to personnel, information systems, and business processes that hinder the improvement of their data analytics environment. In this context, maturity models evaluate, based on defined criteria, the current state of an organization and identify its maturity level so that it can improve based on the findings. In this paper, a maturity model is proposed to identify, and thereby help reduce, gaps in the analytics environments of financial companies. The model includes artifacts and evaluation criteria focused on technology, governance, data management, culture, and analytics itself, which provides a broader and more structured diagnostic process for the analytics environment. The proposed model was tested in three companies in the Peruvian financial sector, and the results suggest that the specialists gained a clearer perspective than their initial impressions of the situation of their companies' analytics environments.
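To make the assessment mechanics concrete, here is a minimal Python sketch of a maturity roll-up across the five dimensions the abstract names. The level names, 1-to-5 scoring, and cut-offs are invented for illustration; the thesis's actual instrument is not shown in the abstract.

```python
# Illustrative maturity roll-up; dimensions follow the abstract, while
# levels, scores, and cut-offs are hypothetical.
DIMENSIONS = ["technology", "governance", "data_management",
              "culture", "analytics"]
LEVELS = ["initial", "managed", "defined", "quantified", "optimized"]

def maturity_level(scores: dict) -> str:
    """Map per-dimension scores (1-5) to an overall maturity level."""
    avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return LEVELS[min(int(avg) - 1, len(LEVELS) - 1)]

print(maturity_level({"technology": 3, "governance": 2,
                      "data_management": 3, "culture": 2,
                      "analytics": 3}))   # -> "managed"
```

Per-dimension scores, rather than the single roll-up, are what expose the specific gaps (e.g., strong technology but weak governance) that the model is designed to surface.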
89

Big Data Analytics of City Wide Building Energy Declarations

MA, YIXIAO January 2015 (has links)
This thesis explores the building energy performance of the domestic sector in the city of Stockholm based on the building energy declaration database. The aims of this master thesis are to analyze the big data sets of around 20,000 buildings in the Stockholm region and to explore the correlation between building energy performance and different internal and external factors affecting building energy consumption, such as building energy systems and building vintages. By using clustering methods, buildings with different energy consumption levels can be easily identified. Thereafter, energy saving potential is estimated by setting step-by-step targets, and feasible energy saving solutions can be proposed in order to drive building energy performance at the city level. A brief introduction to several key concepts (energy consumption in buildings, building energy declarations, and big data) serves as background and helps to clarify the motivation for this master thesis. The methods used include data processing, descriptive analysis, regression analysis, clustering analysis, and energy saving potential analysis. The building energy declaration data are first processed in MS Excel and then reorganized in MS Access. For the data analysis, IBM SPSS is used for descriptive analysis and graphical representation. By defining different energy performance indicators, the descriptive analysis presents the energy consumption and its composition for different building classifications. The results also detail the application of different ventilation systems in different building types. Thereafter, the correlation between building energy performance and five different independent variables is analyzed using a linear regression model. Clustering analysis is further performed on the studied buildings to target low-energy-efficiency groups, and buildings with various energy consumption levels are identified and grouped based on their energy performance. This shows that clustering is quite useful in big data analysis, although some clustering parameters need further adjustment to achieve more satisfactory results. The energy saving potential of the studied buildings is calculated as well. The conclusion shows that the maximal potential for energy savings in the studied buildings is estimated at 43% (2.35 TWh) for residential buildings and 54% (1.68 TWh) for non-residential premises, with the saving potential calculated for different building categories and different clusters as well.
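The clustering step described above can be sketched with k-means, a common choice for grouping buildings by energy performance (the abstract does not name the specific algorithm used). In the Python sketch below, the features and data are synthetic stand-ins for the declaration database.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the ~20,000 declared buildings; the real
# declaration features (energy intensity, vintage, systems) differ.
rng = np.random.default_rng(2)
X = np.column_stack([rng.normal(160, 40, 500),        # kWh/m2 per year
                     rng.integers(1940, 2010, 500)])  # construction year

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))

# Clusters with the highest mean intensity are the low-efficiency
# groups that saving-potential targets would focus on.
for c in range(4):
    print(c, round(X[labels == c, 0].mean(), 1), "kWh/m2")
```

Standardizing the features before clustering matters here: without it, the year column would dominate the distance metric and the energy-intensity groupings the analysis targets would be lost.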
90

A Hybrid Infrastructure of Enterprise Architecture and Business Intelligence & Analytics to Empower Knowledge Management in Education

Moscoso-Zea, Oswaldo 09 May 2019 (has links)
The large volumes of data (Big Data) generated on a global scale and within organizations, along with the knowledge that resides in people and in business processes, make organizational knowledge management (KM) very complex. Sound KM can be a source of opportunities and competitive advantage for organizations that use their data intelligently and subsequently generate knowledge from them. Two of the fields that support KM and that have grown rapidly in recent years are business intelligence (BI) and enterprise architecture (EA). On the one hand, BI allows organizations to take advantage of the information stored in data warehouses using operations such as slice, dice, roll-up, and drill-down; this information is obtained from operational databases through an extraction, transformation, and loading (ETL) process. On the other hand, EA allows institutions to establish methods that support the creation, sharing, and transfer of the knowledge residing in people and processes through the use of blueprints and models. One of the objectives of KM is to create a culture where tacit knowledge (knowledge that resides in a person) stays in an organization when qualified, expert personnel leave the institution or when changes are required in the organizational structure, in computer applications, or in the technological infrastructure. In higher education institutions (HEIs), not having an adequate KM approach to handle data is an even greater problem due to the nature of the industry. Generally, HEIs have very little interdependence between departments and faculties; in other words, there is low standardization, redundancy of information, and constant duplication of applications and functionality across departments, which makes organizations inefficient. That is why the research performed within this dissertation has focused on finding an adequate KM method and on the technological infrastructure that supports the management of information across all the knowledge dimensions: people, processes, and technology. All of this is aimed at discovering innovative mechanisms to improve education and the service that HEIs offer to their students and teachers by improving their processes. Despite the existence of some initiatives and papers on KM frameworks, we were not able to find a standard framework that supports or guides KM initiatives. In addition, the KM frameworks found in the literature do not present practical mechanisms to gather and analyze all the knowledge dimensions so as to facilitate the implementation of KM projects. The core contribution of this thesis is a hybrid KM infrastructure based on EA and BI, developed through empirical research and taking as reference the framework developed for KM. The proposed infrastructure will help HEIs to improve education in a general way by analyzing reliable, cleaned data and integrating analytics from the perspective of EA. EA analytics takes into account the interdependence between the objects that make up the organization: people, processes, applications, and technology. The presented infrastructure opens the door to research projects that broaden the kinds of knowledge generated, by integrating the application information found in data warehouses with the information about people and organizational processes found in EA repositories.

To validate the proposal, a case study was carried out within a university, with promising initial results. As future work, it is planned to automate different HEI activities through a software development methodology based on EA models. In addition, the goal is to develop a KM system that allows the generation of different, new types of analytics, which would be impossible to obtain with only transactional or multidimensional databases.
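As a concrete illustration of the BI operations the abstract names (slice, dice, roll-up, drill-down), the short Python/pandas sketch below performs a roll-up and a slice over a toy fact table. The table and its columns are invented; the thesis's actual warehouse schema is not shown in the abstract.

```python
import pandas as pd

# Toy fact table standing in for an HEI data warehouse.
facts = pd.DataFrame({
    "faculty": ["Eng", "Eng", "Sci", "Sci"],
    "year":    [2017, 2018, 2017, 2018],
    "passed":  [820, 870, 610, 640],
})

# Roll-up: aggregate upward along the faculty dimension.
print(facts.groupby("faculty")["passed"].sum())

# Slice: fix one dimension member (year = 2018), look across the rest.
print(facts[facts["year"] == 2018])
```

The hybrid infrastructure's added step is to join facts like these with EA repository objects (people, processes, applications) so that the same roll-ups can be cut along organizational dimensions that a warehouse alone does not carry.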
