151

Active Analytics: Suggesting Navigational Links to Users Based on Temporal Analytics Data

Koza, Jacob 01 January 2019
Front-end developers are tasked with keeping websites up to date while optimizing user experiences and interactions. Tools and systems have been developed to give these individuals granular analytic insight into who is interacting with their sites, with what, and how. These systems maintain a historical record of user interactions that can be leveraged for design decisions. Developing a framework to aggregate those historical usage records, and using it to anticipate user interactions on a webpage, could automate the task of optimizing web pages. In this research a system called Active Analytics was created that takes Google Analytics historical usage data and provides a dynamic front-end system for automatically updating web page navigational elements. The previous year's data is extracted from Google Analytics and transformed into a summarization of top navigation steps. Once stored, a responsive front-end system selects from this data a three-week span from the previous year: the current week plus the weeks immediately before and after it. The most frequently reached pages, or their parent pages, have their navigational UI elements highlighted on a top-level or landing page to reduce the effort needed to reach those pages. The Active Analytics framework was evaluated by recruiting volunteers and randomly assigning them one of two versions of a site: one with the framework and one without. Users of the framework-enabled site were found to navigate the site more easily than users of the original.
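As a rough illustration of the window-selection step described above, the sketch below assumes weekly pageview totals have already been extracted from Google Analytics and summarized; the data layout, function name, and values are hypothetical, not the thesis's implementation, and week wrap-around at year boundaries is ignored for brevity.

```python
from datetime import date

# Hypothetical weekly aggregates from last year's Google Analytics data:
# (ISO week number, page path, pageviews).
HISTORY = [
    (24, "/products", 1200), (25, "/products", 1400), (26, "/support", 900),
    (25, "/blog", 300), (26, "/products", 1100), (24, "/support", 700),
]

def top_pages_for_window(today: date, history, n=3):
    """Rank pages by views in last year's three-week window: the weeks
    matching the previous, current, and next week of today's date."""
    current_week = today.isocalendar()[1]
    window = {current_week - 1, current_week, current_week + 1}
    totals = {}
    for week, path, views in history:
        if week in window:
            totals[path] = totals.get(path, 0) + views
    return sorted(totals, key=totals.get, reverse=True)[:n]

# Pages returned here would get their navigation elements highlighted
# on the landing page by the responsive front-end component.
print(top_pages_for_window(date(2019, 6, 20), HISTORY))  # ['/products', ...]
```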
152

How to capture that business value everyone talks about? : An exploratory case study on business value in agile big data analytics organizations

Svenningsson, Philip, Drubba, Maximilian January 2020
Background: Big data analytics has been referred to as a hype of the past decade, leading many organizations to adopt data-driven processes to stay competitive in their industries. Many of the organizations adopting big data analytics use agile methodologies, where the most important outcome is to maximize business value. Multiple scholars argue that big data analytics leads to increased business value; however, there is a theoretical gap within the literature about how agile organizations can capture this business value in a practically relevant way. Purpose: Building on a combined definition that capturing business value means being able to define, communicate, and measure it, the purpose of this thesis is to explore how agile organizations capture business value from big data analytics, as well as to find out which aspects of value are relevant when defining it. Method: This study follows an abductive research approach with a foundation in theory, using a qualitative research design. A single case study of Nike Inc. was conducted to generate the primary data for this thesis: nine participants from different domains within the organization were interviewed, and the results were analysed with a thematic content analysis. Findings: The findings indicate that, in order for agile organizations to capture business value generated from big data analytics, they need to (1) define the value through a synthesised value map, (2) establish a common language with the help of a business translator and agile methods, and (3) measure the business value before, during, and after development by using individually identified KPIs derived from the business value definition.
153

Analyzing Small Businesses' Adoption of Big Data Security Analytics

Mathias, Henry 01 January 2019
Despite the increased cost of data breaches due to advanced, persistent threats from malicious sources, the adoption of big data security analytics among U.S. small businesses has been slow. Anchored in diffusion of innovation theory, the purpose of this correlational study was to examine ways to increase the adoption of big data security analytics among small businesses in the United States by examining the relationship between small business leaders' perceptions of big data security analytics and their adoption of it. The research questions were developed to determine how to increase adoption, measured as a function of the user's perceived attributes of innovation represented by the independent variables: relative advantage, compatibility, complexity, observability, and trialability. The study included a cross-sectional survey distributed online to a convenience sample of 165 small businesses. Pearson correlations and multiple linear regression were used to assess relationships between variables. There were no significant positive correlations between relative advantage or compatibility and the dependent variable, adoption; however, there were significant negative correlations between complexity and adoption and between trialability and adoption, and a significant positive correlation between observability and adoption. The implications for positive social change include an increase in knowledge, skill sets, and jobs for employees and increased confidentiality, integrity, and availability of systems and data for small businesses. Social benefits include improved decision making for small businesses and increased secure transactions between systems by detecting and eliminating advanced, persistent threats.
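A hedged sketch of the study's statistical approach, Pearson correlations for each perceived attribute against adoption followed by multiple linear regression, using simulated data; the study's actual survey instrument and scores are not reproduced, and the simulated effects merely echo the reported directions for observability and complexity.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 165  # sample size matching the study's survey

# Hypothetical Likert-style composite scores for the five perceived
# attributes of innovation (predictors), columns in this order:
names = ["relative_advantage", "compatibility", "complexity",
         "observability", "trialability"]
X = rng.normal(3.5, 0.8, size=(n, 5))

# Simulated adoption score: observability raises it, complexity lowers it.
adoption = 0.4 * X[:, 3] - 0.3 * X[:, 2] + rng.normal(0, 0.5, n)

# Bivariate Pearson correlations, one per attribute.
for i, name in enumerate(names):
    r, p = stats.pearsonr(X[:, i], adoption)
    print(f"{name}: r={r:+.2f}, p={p:.3f}")

# Multiple linear regression on all five attributes at once.
model = LinearRegression().fit(X, adoption)
print("coefficients:", dict(zip(names, model.coef_.round(2))))
```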
154

Towards Prescriptive Analytics Systems in Healthcare Delivery: AI-Transformation to Improve High Volume Operating Rooms Throughput

Al Zoubi, Farid 06 February 2024
The increasing demand for healthcare services, coupled with the challenges of managing budgets and navigating complex regulations, has underscored the need for sustainable and efficient healthcare delivery. In response to this pressing issue, this thesis aims to optimize hospital efficiency using Artificial Intelligence (AI) techniques. The focus extends beyond improving surgical intraoperative time to encompass the preoperative and postoperative periods as well. The research presents a novel Prescriptive Analytics System (PAS) designed to enhance the Surgical Success Rate (SSR) in surgeries, specifically in high-volume arthroplasty. The SSR is a critical metric that reflects the successful completion of four surgeries within an 8-hour timeframe. By leveraging AI, the developed PAS has the potential to improve the SSR from its current rate of 39% at The Ottawa Hospital to 100%. The research is structured around five peer-reviewed journal papers, each addressing a specific aspect of the optimization of surgical efficiency. The first paper employs descriptive analytics to examine the factors influencing delays and overtime pay during surgeries. By identifying and analyzing these factors, insights are gained into the underlying causes of surgical inefficiency. The second paper proposes three frameworks aimed at improving Operating Room (OR) throughput. These frameworks provide structured guidelines and strategies to enhance the overall efficiency of surgeries, encompassing the preoperative, intraoperative, and postoperative stages. By streamlining the workflow and minimizing bottlenecks, the proposed frameworks have the potential to significantly optimize surgical operations. The third paper outlines a set of actions required to transform a selected predictive system into a prescriptive one. By integrating AI algorithms with decision support mechanisms, the system can offer actionable recommendations to surgeons during surgeries. This transformative step holds tremendous potential for enhancing surgical outcomes while reducing time. The fourth paper introduces a benchmarking and monitoring system for the selected framework that predicts SSR. Leveraging historical data, this system uses supervised machine learning algorithms to forecast the likelihood of successful outcomes based on various surgical-team and procedural parameters. By providing real-time monitoring and predictive insights, surgeons can proactively address potential risks and improve decision-making during surgeries. Lastly, an application paper demonstrates the practical implementation of the prescriptive analytics system. The case study highlights how the system optimizes the allocation of resources and enables the scheduling of additional surgeries on days with a high predicted SSR. By leveraging the system's capabilities, hospitals can maximize their surgical capacity and improve overall patient care.
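A small illustration of the kind of supervised SSR predictor the fourth paper describes: a binary classifier forecasting whether all four surgeries fit within the 480-minute block, trained on simulated per-day scheduling features. The features, model choice, and data here are assumptions for illustration, not the thesis's actual system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500

# Hypothetical per-day features: mean procedure duration (min), mean
# turnover time (min), surgeon annual case volume, first-case delay (min).
X = np.column_stack([
    rng.normal(95, 15, n),   # mean procedure duration
    rng.normal(25, 8, n),    # mean turnover time
    rng.poisson(150, n),     # surgeon annual case volume
    rng.exponential(10, n),  # first-case start delay
])

# SSR label: 1 if four procedures plus three turnovers and the start
# delay fit within the 8-hour (480 min) OR block.
total_time = 4 * X[:, 0] + 3 * X[:, 1] + X[:, 3]
y = (total_time <= 480).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]).round(3))
```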
155

Confinement tuning of a 0-D plasma dynamics model

Hill, Maxwell D. 27 May 2016
Investigations of tokamak dynamics, especially as they relate to the challenge of burn control, require an accurate representation of energy and particle confinement times. While the ITER-98 scaling law represents a correlation of data from a wide range of tokamaks, confinement scaling laws will need to be fine-tuned to specific operational features of specific tokamaks in the future. A methodology for developing, by regression analysis, tokamak- and configuration-specific confinement tuning models is presented and applied to DIII-D as an illustration. It is shown that inclusion of tuning parameters in the confinement models can significantly enhance the agreement between simulated and experimental temperatures relative to simulations in which only the ITER-98 scaling law is used. These confinement tuning parameters can also be used to represent the effects of various heating sources and other plasma operating parameters on overall plasma performance and may be used in future studies to inform the selection of plasma configurations that are more robust against power excursions.
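For context, the kind of tuned scaling relation the abstract describes can be written with a multiplicative tuning factor fitted by regression to a specific machine's discharges. The IPB98(y,2) ("ITER-98") form and exponents below are the standard published values; the tuning factor C is a sketch of the thesis-style adjustable parameter, not its exact model.

```latex
% ITER-98 (IPB98(y,2)) H-mode thermal confinement scaling with a
% machine/configuration-specific tuning factor C fitted by regression.
\begin{equation}
  \tau_E = C \, \tau_E^{98}, \qquad
  \tau_E^{98} = 0.0562\,
    I_p^{0.93}\, B_T^{0.15}\, \bar{n}_e^{0.41}\, P^{-0.69}\,
    R^{1.97}\, \kappa^{0.78}\, \epsilon^{0.58}\, M^{0.19}
\end{equation}
% Units: I_p in MA, B_T in T, n_e in 10^{19} m^-3, P in MW, R in m.
% C = 1 recovers the multi-machine fit; regression on a specific
% device's data adjusts C (and, if desired, individual exponents).
```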
156

Analysis of charging and driving behavior of plug-in electric vehicles through telematics controller data

Boston, Daniel Lewis 07 January 2016
Little is known about the impact electrification has on driving behavior, or about how drivers charge their electrified vehicles. The recent influx of electrified vehicles presents a new market of vehicles that allow drivers to choose between electrical and conventional gasoline energy sources. The battery capacity of current full battery electric vehicles requires route planning not required of conventional vehicles, due to limited range, extended charging times, and limited charging infrastructure. There is currently little information on how drivers react to these limitations. A number of current models of fully electric and plug-in hybrid electric vehicles transmit data wirelessly on key-on, key-off, and charging events. The data include battery state of charge, distance driven on gasoline and on electric power, energy consumed, and many other parameters associated with driving and charging behavior. In this thesis, these data were processed and analyzed to benchmark the performance and characteristics of driving and charging patterns. Vehicles were analyzed and contrasted based on model type, geographic location, length of ownership, and other variables. The result is a set of aggregate benchmarks and parameters covering 56 weeks of electrified-vehicle tracking. These parameters were compared to the EV Project, a large-scale electrified-vehicle study performed by Idaho National Labs, to confirm patterns of expected behavior. New parameters not present in the EV Project were analyzed, providing insight into charging and driving behavior not examined at this scale in any previous study. This study provides benchmarks and conclusions on this new driving behavior, such as large-scale analysis of brake-regeneration performance and the decline of range anxiety over time. Differences in charging and driving behavior across geographic regions and levels of ownership experience were also examined, providing insight into how these variables affect performance and driving and charging patterns. Comparison of parameters established by the EV Project with the new parameters analyzed in this report will help build a benchmark for future studies of electrified vehicles.
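A toy pandas sketch of the sort of aggregation used to benchmark charging and driving behavior, under an assumed event schema (the actual controller-data fields differ): per-vehicle electric-drive share, and the mean state of charge at plug-in as a possible proxy for range anxiety.

```python
import pandas as pd

# Hypothetical telematics event log; field names are invented.
events = pd.DataFrame({
    "vin":        ["A", "A", "B", "B", "A"],
    "event":      ["key_off", "charge", "charge", "key_off", "charge"],
    "soc_start":  [0.80, 0.35, 0.50, 0.90, 0.20],
    "soc_end":    [0.55, 0.95, 0.80, 0.60, 0.85],
    "miles_elec": [22.0, 0.0, 0.0, 18.5, 0.0],
    "miles_gas":  [5.0, 0.0, 0.0, 0.0, 0.0],
})

drives = events[events["event"] == "key_off"]
charges = events[events["event"] == "charge"]

# Fraction of distance driven electrically, per vehicle.
g = drives.groupby("vin")[["miles_elec", "miles_gas"]].sum()
elec_share = g["miles_elec"] / (g["miles_elec"] + g["miles_gas"])

# Mean state of charge at which drivers plug in (range-anxiety proxy:
# plugging in at high SOC suggests anxiety about running low).
mean_soc_at_plugin = charges.groupby("vin")["soc_start"].mean()

print(pd.DataFrame({"elec_share": elec_share,
                    "mean_soc_at_plugin": mean_soc_at_plugin}))
```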
157

Analysis of news sentiment and its application to finance

Yu, Xiang January 2014
We report our investigation of how news stories influence the behaviour of tradable financial assets, in particular equities. We consider the established methods of turning news events into a quantifiable measure and explore the models which connect these measures to financial decision making and risk control. Our study is built around two problems of both practical and research interest: determining trading strategies and quantifying trading risk. We have constructed a new measure which takes into consideration (i) the volume of news and (ii) the decaying effect of news sentiment. In this way we derive the impact of aggregated news events for a given asset; we have defined this as the impact score. We also characterise the behaviour of assets using three parameters, namely return, volatility and liquidity, and construct predictive models which incorporate impact scores. The derivation of the impact measure and the characterisation of asset behaviour by introducing liquidity are two innovations reported in this thesis and are claimed as contributions to knowledge. The impact of news on asset behaviour is explored using two sets of predictive models: univariate models and multivariate models. In our univariate predictive models, a universe of 53 assets was considered in order to justify the relationship between news and assets across 9 different sectors. For the multivariate case, we selected 5 stocks from the financial sector only, as this is relevant for the purpose of constructing trading strategies. We have analysed the celebrated Black-Litterman model (1991) and constructed our Bayesian multivariate predictive models such that we can incorporate domain expertise to improve the predictions. Not only does this suggest one of the best ways to choose priors in Bayesian inference for financial models using news sentiment, but it also allows the use of current and synchronised data with market information. This is also a novel aspect of our work and a further contribution to knowledge.
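A minimal sketch of a decay-weighted impact score of the kind described above, assuming exponential decay: more news items move the score more (volume), and older items contribute less (decay). The thesis's exact functional form, half-life, and volume weighting are not reproduced; the values here are illustrative.

```python
import math

def impact_score(news, t_now, half_life_hours=24.0):
    """news: list of (timestamp_hours, sentiment in [-1, 1]) events.
    Sums sentiment with exponentially decaying weights, so the score
    reflects both the volume and the recency of news for an asset."""
    lam = math.log(2) / half_life_hours
    return sum(s * math.exp(-lam * (t_now - t)) for t, s in news if t <= t_now)

# Two older positive stories are outweighed by one fresh negative story.
events = [(0.0, +0.6), (10.0, +0.4), (30.0, -0.8)]
print(round(impact_score(events, t_now=36.0), 3))  # about -0.272
```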
158

Semantic Analysis in Web Usage Mining

Norguet, Jean-Pierre E 20 March 2006
With the emergence of the Internet and of the World Wide Web, the Web site has become a key communication channel in organizations. To satisfy the objectives of the Web site and of its target audience, adapting the Web site content to the users' expectations has become a major concern. In this context, Web usage mining, a relatively new research area, and Web analytics, the part of Web usage mining that has most emerged in the corporate world, offer many Web communication analysis techniques. These techniques include prediction of the user's behaviour within the site, comparison between expected and actual Web site usage, adjustment of the Web site with respect to the users' interests, and mining and analyzing Web usage data to discover interesting metrics and usage patterns. However, Web usage mining and Web analytics suffer from significant drawbacks when it comes to supporting the decision-making process at the higher levels of the organization. Indeed, according to organization theory, the higher levels of an organization need summarized and conceptual information to take fast, high-level, and effective decisions. For Web sites, these levels include the organization managers and the Web site chief editors. At these levels, the results produced by Web analytics tools are mostly useless, since most of them target Web designers and Web developers. Summary reports like the number of visitors and the number of page views can be of some interest to the organization manager, but these results are poor. Finally, page-group and directory hits give the Web site chief editor conceptual results, but these are limited by several problems such as page synonymy (several pages contain the same topic), page polysemy (a page contains several topics), page temporality, and page volatility. Web usage mining research projects, for their part, have mostly left aside Web analytics and its limitations and have focused on other research paths, such as usage pattern analysis, personalization, system improvement, site structure modification, marketing business intelligence, and usage characterization. A potential contribution to Web analytics can be found in research on reverse clustering analysis, a technique based on self-organizing feature maps. This technique integrates Web usage mining and Web content mining in order to rank the Web site pages according to an original popularity score. However, the algorithm is not scalable and does not answer the page-polysemy, page-synonymy, page-temporality, and page-volatility problems. As a consequence, these approaches fail to deliver summarized and conceptual results. An interesting attempt to obtain such results has been the Information Scent algorithm, which produces a list of term vectors representing the visitors' needs. These vectors provide a semantic representation of the visitors' needs and can be easily interpreted. Unfortunately, the results suffer from term polysemy and term synonymy, are visit-centric rather than site-centric, and are not scalable to produce. Finally, according to a recent survey, no Web usage mining research project has proposed a satisfying solution to provide site-wide summarized and conceptual audience metrics. In this dissertation, we present our solution to answer the need for summarized and conceptual audience metrics in Web analytics. We first describe several methods for mining the Web pages output by Web servers: content journaling, script parsing, server monitoring, network monitoring, and client-side mining. These techniques can be used alone or in combination to mine the Web pages output by any Web site. Then, the occurrences of taxonomy terms in these pages can be aggregated to provide concept-based audience metrics. To evaluate the results, we implemented a prototype and ran a number of test cases with real Web sites. According to the first experiments with our prototype and SQL Server OLAP Analysis Service, concept-based metrics prove extremely summarized and much more intuitive than page-based metrics. As a consequence, concept-based metrics can be exploited at higher levels of the organization. For example, organization managers can redefine the organization strategy according to the visitors' interests. Concept-based metrics also give an intuitive view of the messages delivered through the Web site and make it possible to adapt the Web site communication to the organization objectives. The Web site chief editor, for their part, can interpret the metrics to redefine the publishing orders and the sub-editors' writing tasks. As decisions at higher levels of the organization should be more effective, concept-based metrics should contribute significantly to Web usage mining and Web analytics.
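A toy sketch of the concept-based aggregation step: occurrences of taxonomy terms in mined pages are counted and weighted by audience (pageviews). The taxonomy, pages, and weighting below are invented for illustration; the dissertation's pipeline, including its OLAP aggregation, is far richer.

```python
import re
from collections import Counter

# Hypothetical term-to-concept taxonomy.
TAXONOMY = {"pricing": "Sales", "invoice": "Sales",
            "tutorial": "Support", "faq": "Support"}

def concept_metrics(pages):
    """pages: list of (page_text, pageviews). Each taxonomy-term
    occurrence contributes the page's view count to its concept,
    yielding site-wide concept-based audience metrics."""
    metrics = Counter()
    for text, views in pages:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in TAXONOMY:
                metrics[TAXONOMY[token]] += views
    return metrics

pages = [("Pricing and invoice FAQ", 120), ("Install tutorial", 80)]
print(concept_metrics(pages))  # Counter({'Sales': 240, 'Support': 200})
```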
159

A framework for knowledge discovery within business intelligence for decision support

Basra, Rajveer Singh January 2008
Business Intelligence (BI) techniques provide the potential not only to efficiently manage but also to analyse and apply the collected information in an effective manner. Benefiting from research both within industry and academia, BI provides functionality for accessing, cleansing, transforming, analysing and reporting organisational datasets. This provides further opportunities for the data to be explored and can assist organisations in the discovery of correlations, trends and patterns that lie hidden within the data. This hidden information can be employed to provide an insight into opportunities to make an organisation more competitive, by allowing managers to make more informed decisions and, as a result, corporate resources to be optimally utilised. This potential insight provides organisations with an unrivalled opportunity to remain abreast of market trends. Consequently, BI techniques provide significant opportunity for integration with Decision Support Systems (DSS). The gap identified within the current body of knowledge, which motivated this research, is that no suitable framework for BI currently exists that can be applied at a meta-level and is therefore tool, technology and domain independent. To address this gap, this study proposes a meta-level framework, ‘KDDS-BI’, which can be applied at an abstract level to structure a BI investigation, irrespective of the end user. KDDS-BI not only facilitates the selection of suitable techniques for BI investigations, reducing the reliance upon ad-hoc investigative approaches that rely upon ‘trial and error’, but further integrates Knowledge Management (KM) principles to ensure the retention and transfer of knowledge, through a structured approach to providing DSS based upon the principles of BI. In order to evaluate and validate the framework, KDDS-BI has been investigated through three distinct case studies. First, KDDS-BI facilitates the integration of BI within direct marketing to provide innovative solutions for analysis based upon the most suitable BI technique. Secondly, KDDS-BI is investigated within sales promotion, to facilitate the selection of tools and techniques for more focused in-store marketing campaigns and to increase revenue through the discovery of hidden data. Finally, operations management is analysed within the highly dynamic and unstructured environment of the London Underground Ltd. network, through a unique BI solution to organise and manage resources, thereby increasing the efficiency of business processes. The three case studies provide insight into how KDDS-BI structures the integration of BI within business processes, and the opportunity to analyse its performance within three independent environments for distinct purposes, thereby validating and corroborating the proposed framework and its value to business processes.
160

Methodology for estimating the impact of call center calls on customer churn using text mining techniques

Sepúlveda Jullian, Catalina January 2015
Industrial Civil Engineer / The telecommunications industry is in constant growth, driven by the development of new technologies and people's growing need to stay connected. For the same reason, it is highly competitive, and customers are free to choose whichever option best suits them and meets their expectations. Churn prediction, and with it customer retention, are therefore fundamental to a company's success. However, given the intense competition among firms, churn models must innovate by drawing on new sources of information, such as calls to the call center. The overall objective of this work is thus to measure the impact of calls made to the call center on the prediction of customer churn. To achieve this, information on customers' interactions with the call center is available, specifically the text of each call. To extract information about the content of the calls, a topic detection model was applied to the text in order to identify the subjects discussed and use this information in the churn models. The results obtained from several logit churn-prediction models show that a model using both the call information and the customer information (demographic and transactional) is 8.7% higher in accuracy than one that does not use this new source of information. In addition, the model with both types of variables has a Type I error 25% lower than a model that does not include the call content. From these analyses it can be concluded that call center calls are indeed relevant and helpful for predicting customer churn, since they increase the model's predictive power and fit. They also provide new information about customer behaviour and make it possible to detect the topics that may be associated with churn, which enables corrective action to be taken.
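A compact sketch of the modeling pipeline the abstract describes: topic detection over call text feeding a logit churn model alongside customer variables. The transcripts, customer features, topic count, and model configuration below are hypothetical; the thesis's corpus and variables are not reproduced.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Hypothetical call transcripts and churn labels.
calls = ["bill too expensive cancel service",
         "coverage problem signal drops",
         "upgrade plan add data",
         "cancel contract switch provider"]
churned = np.array([1, 0, 0, 1])

# Detect topics in the call text and get per-call topic proportions.
X_counts = CountVectorizer().fit_transform(calls)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
X_topics = lda.fit_transform(X_counts)

# Hypothetical customer features: tenure (months), monthly spend.
X_customer = np.array([[6, 40.0], [30, 25.0], [24, 55.0], [3, 60.0]])

# Logit churn model on customer features plus call-topic features.
X_full = np.hstack([X_customer, X_topics])
model = LogisticRegression().fit(X_full, churned)
print("in-sample accuracy:", model.score(X_full, churned))
```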
