21. Knowledge-Based Predictive Maintenance for Fleet Management. Killeen, Patrick. 17 January 2020.
In recent years, advances in information technology have led to an increasing number of devices (or things) being connected to the internet, and the resulting data can be used by applications to acquire new knowledge. The Internet of Things (IoT), a network of computing devices that can interact with their environment without requiring user interaction, and big data, a field that deals with the exponentially increasing rate of data creation, which challenges both the cloud in its current state and standard data-analysis technologies, have become hot topics. With all this data being produced, new applications such as predictive maintenance become possible. One such application is monitoring a fleet of vehicles in real time to predict their remaining useful life, which could help companies lower their fleet-management costs by reducing the fleet's average vehicle downtime. The consensus self-organized models (COSMO) approach is an example of a predictive maintenance system for a fleet of public transport buses; it attempts to diagnose faulty buses that deviate from the rest of the fleet. The present work proposes a novel IoT-based architecture for predictive maintenance that consists of three primary nodes: the vehicle node (VN), the server leader node (SLN), and the root node (RN). The VN represents the vehicle and performs lightweight data acquisition, data analytics, and data storage; it is connected to the fleet via its wireless internet connection. The SLN manages a region of vehicles and performs heavier-duty data storage, fleet-wide analytics, and networking. The RN is the central point of administration for the entire system: it controls the entire fleet and provides the application interface to the fleet system. A minimum viable prototype (MVP) of the proposed architecture was implemented and deployed to a garage of the Société de Transport de l'Outaouais (STO), Gatineau, Canada.
The VN in the MVP was implemented using a Raspberry Pi, which acquired sensor data from an STO hybrid bus by reading from a J1939 network; the SLN was implemented using a laptop; and the RN was deployed using meshcentral.com. The goal of the MVP was to perform predictive maintenance for the STO to help reduce its fleet-management costs.
The present work also proposes a fleet-wide unsupervised dynamic sensor selection algorithm, which attempts to improve the sensor selection performed by the COSMO approach; I named this algorithm the improved consensus self-organized models (ICOSMO) approach. To analyze the performance of ICOSMO, a fleet simulation was implemented. The J1939 data gathered from an STO hybrid bus using the MVP was used to generate synthetic data to simulate vehicles, faults, and repairs. The deviation detection of the COSMO and ICOSMO approaches was applied to the synthetic sensor data, and the simulation results were used to compare their performance. Results revealed that, in general, ICOSMO improved the accuracy of COSMO when COSMO was not performing optimally, that is: a) when the histogram distance chosen by COSMO was a poor choice, b) in an environment with relatively high sensor white noise, and c) when COSMO selected poor sensors. On average, ICOSMO only rarely reduced the accuracy of COSMO, which is promising, since it suggests that deploying ICOSMO as a predictive maintenance system should perform as well as or better than COSMO. More experiments are required to better understand the performance of ICOSMO; the goal is to eventually deploy ICOSMO to the MVP.
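The fleet-level deviation detection described above can be sketched in miniature. The following is an illustrative reading of the COSMO idea, not the thesis's implementation: each vehicle summarizes a sensor signal as a histogram, histograms are compared pairwise with a distance measure (Hellinger here; COSMO can use several), and the vehicle farthest on average from the rest of the fleet is the deviation candidate. All function names and parameters are my own.

```python
import math
from statistics import mean

def histogram(samples, bins, lo, hi):
    """Normalized histogram of a sensor signal: a simple per-vehicle model."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    return [c / len(samples) for c in counts]

def hellinger(p, q):
    """One possible histogram distance; the choice of distance matters (see above)."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def most_deviating_vehicle(models):
    """Score each vehicle by its mean distance to the rest of the fleet;
    the highest scorer is the candidate for fault diagnosis."""
    scores = [mean(hellinger(m, n) for j, n in enumerate(models) if j != i)
              for i, m in enumerate(models)]
    return max(range(len(models)), key=scores.__getitem__), scores
```

A vehicle whose sensor distribution has drifted away from the fleet consensus (for example, because of a developing fault) will stand out under this scoring, which is the intuition behind diagnosing "buses that deviate from the rest of the fleet."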
22. Bodies of Data: The Social Production of Predictive Analytics. Madisson Whitman. 26 June 2020.
Bodies of Data challenges the promise of big data in knowing and organizing people by explicating how data are made and by theorizing mismatches between actors, data, and institutions. Situated at a large public university in the United States that hosts approximately 30,000 undergraduate students, this research ethnographically traces the development and deployment of an app for student success that draws on traditional (demographic information, enrollment history, grade distributions) and non-traditional (WiFi network usage, card swipes, learning management systems) student data to anticipate the likelihood of graduation within a four-year period. The app, which offers an interface for students based on nudging, is the product of collaborations between actors who specialize in educational technology. As these actors manage the app, they must also interpret data against the students who generate those data, many of whom do not neatly mirror their data counterparts. The central question animating this research asks how the designers of the app create order, whether through material bodies that are knowable to data collection or through reorganized demographic groupings, as they render students into data.

To address this question and investigate practices of making data, I conducted 12 months of ethnographic fieldwork, using participant observation and interviews with university administrators, data scientists, app developers, and undergraduate students. Through a theoretical approach informed by anthropology, science and technology studies, critical data studies, and feminist theory, I analyze how data and the institution make each other through the modeling of student bodies and the reshaping of subjectivity. I leverage technical glitches (slippages between students and their data) and failure at large at the institution as analytics, both to expose otherwise hidden processes of ordering and to productively read failure as an opportunity for imagining what data could do.
Predictive projects that derive from big data are increasingly common in higher education as institutions look to data to understand populations. Bodies of Data provides empirical evidence of how data are made through sociotechnical processes in which data serve not understanding but ordering. As universities look to big data to inform decision-making, the findings of this research contradict assumptions that data provide neutral and objective ways of knowing students.
23. Modeling strategies using predictive analytics: Forecasting future sales and churn management (Strategier för modellering med prediktiv analys). Aronsson, Henrik. January 2015.
This project was carried out for Attollo, a consulting firm specialized in Business Intelligence and Corporate Performance Management. The project explores a new area for Attollo, predictive analytics, which was then applied at Klarna, a client of Attollo. Attollo has a partnership with IBM, which sells services for predictive analytics, and the tool used in this project is IBM's SPSS Modeler software. Five examples describe the predictive work carried out at Klarna and illustrate the functionality of the different predictive models. The result of this project demonstrates how predictive models can be created using predictive analytics. The conclusion is that predictive analytics enables companies to understand their customers better and hence make better decisions.
24. Machine Learning Predictive Analytics for Player Movement Prediction in NBA: Applications, Opportunities, and Challenges. Stephanos, Dembe K.; Husari, Ghaith; Bennett, Brian T.; Stephanos, Emma. 15 April 2021.
Recently, strategies of National Basketball Association (NBA) teams have evolved with the skillsets of players and the emergence of advanced analytics. This has led to a more free-flowing game in which traditional positions and play calls have been replaced with player archetypes and read-and-react offenses that operate off a variety of isolated actions. The introduction of position tracking technology by SportVU has aided the analysis of these patterns by offering a vast dataset of on-court behavior. There have been numerous attempts to identify and classify patterns by evaluating the outcomes of offensive and defensive strategies associated with actions within this dataset, a job currently done manually by reviewing game tape. Some of these classification attempts have used supervised techniques that begin with labeled sets of plays and feature sets to automate the detection of future cases. Increasingly, however, deep learning approaches such as convolutional neural networks have been used in conjunction with player trajectory images generated from positional data. This enables classification to occur in a bottom-up manner, potentially discerning unexpected patterns. Others have shifted focus from classification, instead using positional data to evaluate the success of a given possession based on spatial factors such as defender proximity and player factors such as role or skillset. While play/action detection, classification, and analysis have each been addressed in the literature, a comprehensive approach that accounts for modern trends is still lacking. In this paper, we discuss various approaches to action detection and analysis, and ultimately propose an outline for a deep learning approach to identification and analysis that results in a queryable dataset complete with shot evaluations, thus combining multiple contributions into a serviceable tool capable of assisting and automating much of the work currently done by NBA professionals.
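As a concrete illustration of the trajectory-image idea mentioned above, here is a minimal sketch, not drawn from any cited system, that rasterizes a sequence of (x, y) positions onto a single-channel grid of the kind a CNN could ingest. The 47 x 50 ft half-court dimensions and all names are assumptions for illustration.

```python
def trajectory_to_image(track, width=47, height=50,
                        court_w=47.0, court_h=50.0):
    """Rasterize a sequence of (x, y) court coordinates (feet) onto a
    binary grid: a single-channel 'trajectory image' for a CNN."""
    img = [[0] * width for _ in range(height)]
    for x, y in track:
        col = min(int(x / court_w * width), width - 1)
        row = min(int(y / court_h * height), height - 1)
        img[row][col] = 1  # mark every cell the player passed through
    return img
```

In practice, one image per player (or per play) would be stacked into channels and fed to a convolutional classifier; the point here is only that positional data converts naturally into an image representation, which is what lets classification proceed bottom-up.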
25. Proactive IT Incident Prevention: Using Data Analytics to Reduce Service Interruptions. Malley, Mark G. 01 January 2017.
The cost of resolving user requests for IT assistance rises annually. Researchers have demonstrated that data warehouse analytic techniques can improve service, but they have not established the benefit of using global organizational data to reduce reported IT incidents. The purpose of this quantitative, quasi-experimental study was to examine the extent to which IT staff use of organizational knowledge, generated from data warehouse analytical measures, reduces the number of IT incidents over a 30-day period, as reported by global users of IT within an international pharmaceutical company headquartered in Germany. Organizational learning theory was used to approach the theorized relationship between organizational knowledge and user calls received. Archival data from an internal help desk ticketing system were the source of data, with access provided by the organization under study. The population for this study was all calls logged and linked to application systems registered in a configuration database, and the sample was the 14 application systems with the highest call volume that were under the control of infrastructure management. Based on an analysis of the data using a split-plot ANOVA (SPANOVA) of Time 1, Time 2, treatment, and nontreatment data, there was a small reduction in the number of reported IT incidents in the treatment group, though the reduction was not statistically significant. Implications for positive social change include reassigning employees to other tasks, rather than continuing efforts in this area, enabling employees to support alternative initiatives that drive the development of innovative therapies, benefiting patients and improving employee satisfaction.
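The Time 1/Time 2, treatment/nontreatment design lends itself to a simple illustration. The sketch below is not the study's SPANOVA; it computes only a difference-in-differences point estimate over the same four cells, which conveys the direction of an effect but says nothing about its statistical significance.

```python
from statistics import mean

def did_effect(treat_t1, treat_t2, ctrl_t1, ctrl_t2):
    """Difference-in-differences: the change in mean treatment-group call
    counts minus the change in control-group call counts. A negative value
    suggests the intervention reduced incident volume; significance would
    still require a formal test such as the SPANOVA used in the study."""
    return (mean(treat_t2) - mean(treat_t1)) - (mean(ctrl_t2) - mean(ctrl_t1))
```

The study's finding corresponds to this estimate being negative but small relative to its sampling variability.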
26. Accelerated Online and Hybrid RN-to-BSN Programs: A Predictive Retention Algorithm. Knight, Melissa. 01 January 2019.
Predicting retention and time to graduation within accelerated online and hybrid RN-to-BSN programs is a significant element in leveraging the pipeline of qualified RNs with BSN degrees, but the literature lacks substantial accounts of retention and time-to-graduation outcomes within these programs, and of predictive algorithms developed to offset high attrition rates. The purpose of this study was to quantitatively examine the relationships of pre-entry attributes, academic integration, and institutional characteristics with retention and time to graduation within accelerated online RN-to-BSN programs, in order to begin developing a global predictive retention algorithm. The study was guided by Tinto's theories of integration and student departure (1975, 1984, 1993) and Rovai's composite persistence model. Retrospective datasets from 390 student academic records were obtained. Findings revealed that pre-entry GPA, number of education credits, enrollment status, 1st- and 2nd-course grades and GPA index scores, failed course type, size and geographic region, admission GPA standards, prerequisite criteria, and academic support and retention methods were statistically significant predictors of retention and timely graduation (p < .05). A decision tree model was built in SPSS Modeler to compare multiple regression and binary logistic regression results, yielding a 96% accuracy rate on retention predictions and 46% on timely-graduation predictions. A recommendation for future research is to examine other variables that may be associated with retention and time to graduation, so the results can be used to refine predictive retention models. Accurate predictive retention models can effect positive social change, because RN-to-BSN students who successfully complete a BSN degree will impact the quality and safety of patient care.
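A decision tree like the SPSS Modeler model described above repeatedly splits the data on the feature threshold that most purifies the classes. The following stdlib-only sketch, on synthetic data with an illustrative feature (pre-entry GPA), shows that core splitting step; it is not the study's model.

```python
def gini(labels):
    """Gini impurity of a set of binary retention labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Find the threshold on one numeric feature that minimizes weighted
    Gini impurity: the step a decision tree applies recursively."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if w < best[1]:
            best = (t, w)
    return best
```

A full tree repeats this search over every candidate feature at every node, which is how predictors such as enrollment status and first-course grades would enter the model.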
27. Predictive Analytics in Occupational Safety for Health Care in Ohio to Reduce Injuries. Klaiber, Marina. January 2021.
No description available.
28. A big data augmented analytics platform to operationalize efficiencies at community clinics. Kunjan, Kislaya. 15 April 2016.
Indiana University-Purdue University Indianapolis (IUPUI)

Community Health Centers (CHCs) play a pivotal role in the delivery of primary healthcare to the underserved, yet have not benefited from a modern data analytics platform that can support clinical, operational, and financial decision making across the continuum of care. This research is based on a systems redesign collaborative of seven CHC organizations spread across Indiana to improve efficiency and access to care.

Three research questions (RQs) formed the basis of this research, each of which addresses known knowledge gaps in the literature and identifies areas for future research in health informatics. The first RQ seeks to understand the information needs that support operations at CHCs and to implement an information architecture meeting those needs. The second RQ leverages the implemented data infrastructure to evaluate how advanced analytics can guide open access scheduling, a specific use case of this research. Finally, the third RQ seeks to understand how the data can be visualized to support decision making among varying roles in CHCs.

Based on the unique work and information flow needs uncovered at these CHCs, an end-to-end analytics solution was designed, developed, and validated within the framework of a rapid learning health system. The solution comprised a novel heterogeneous longitudinal clinic data warehouse augmented with big data technologies and dashboard visualizations to inform CHCs of operational priorities and to support engagement in the systems redesign initiative. Application of predictive analytics to the health center data guided the implementation of open access scheduling and up to a 15% reduction in missed appointment rates. Performance measures of importance to specific job profiles within the CHCs were uncovered. This was followed by a user-centered design of an online interactive dashboard to support rapid assessments of care delivery. The impact of the dashboard was assessed over time and formally validated through a usability study involving cognitive task analysis and a System Usability Scale questionnaire. Wider-scale implementation of the data aggregation and analytics platform through regional health information networks could better support a range of health system redesign initiatives in order to address the national 'triple aim' of healthcare.
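The abstract does not specify the missed-appointment model, so the following is a purely hypothetical sketch of what a no-show risk score might look like: a logistic function over two plausible predictors (scheduling lead time and historical no-show rate) with made-up, unfitted coefficients. Function and parameter names are mine.

```python
import math

def no_show_risk(lead_days, prior_no_shows, prior_visits):
    """Toy logistic risk score for a missed appointment. The coefficients
    are illustrative only; a deployed model would fit them to the clinic's
    own appointment history."""
    rate = prior_no_shows / prior_visits if prior_visits else 0.0
    z = -2.0 + 0.05 * lead_days + 3.0 * rate
    return 1 / (1 + math.exp(-z))
```

Scores like this are what would let schedulers overbook high-risk slots or offer open access (same-day) appointments, the use case the second RQ evaluates.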
29. An interoperable electronic medical record-based platform for personalized predictive analytics. Abedtash, Hamed. 31 May 2017.
Indiana University-Purdue University Indianapolis (IUPUI)

Precision medicine refers to delivering customized treatment to patients based on their individual characteristics; it aims to reduce adverse events, improve diagnostic methods, and enhance the efficacy of therapies. Among efforts to achieve the goals of precision medicine, researchers have used observational data to develop predictive models that best predict health outcomes from patients' variables. Although numerous predictive models have been reported in the literature, not all present high prediction power, and as a result not all reach clinical settings to help healthcare professionals make decisions at the point of care. The lack of generalizability stems from the fact that no comprehensive medical data repository exists that holds the information of all patients in the target population. Even when patients' records are available from other sources, the datasets may need further processing prior to analysis due to differences in database structure and in the coding systems used to record concepts. This project intends to fill the gap by introducing an interoperable solution that receives patient electronic health records via the Health Level Seven (HL7) messaging standard from other data sources, transforms the records to the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) for population health research, and applies predictive models to patient data to make predictions about health outcomes. The project comprises three studies. The first study introduces the CCD-to-OMOP parser and evaluates how well OMOP CDM accommodates patient data transferred by HL7 consolidated continuity of care documents (CCDs). The second study explores how to adopt the Predictive Model Markup Language (PMML) for standardizing the dissemination of OMOP-based predictive models. Finally, the third study introduces the Personalized Health Risk Scoring Tool (PHRST), a pilot, interoperable OMOP-based model scoring tool that processes the embedded models and generates risk scores in real time.

The final product addresses the objectives of precision medicine and has the potential not only to be employed at the point of care to deliver individualized treatment to patients, but also to contribute to health outcomes research by easing the collection of clinical outcomes across diverse medical centers, independent of system specifications.
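The CCD-to-OMOP transformation in the first study can be suggested with a deliberately simplified sketch. It assumes a CCD already parsed into a flat dict and maps it to OMOP-style person and condition_occurrence rows; a real pipeline would parse HL7 XML and translate source codes to standard concepts through the OMOP vocabularies. The gender concept IDs (8507 male, 8532 female) are OMOP's standard values; the input field names are illustrative assumptions.

```python
GENDER_CONCEPT = {"M": 8507, "F": 8532}  # OMOP standard gender concept IDs

def ccd_to_omop(ccd):
    """Map a simplified, pre-parsed CCD-like dict to OMOP CDM rows."""
    person = {
        "person_id": ccd["patient_id"],
        "gender_concept_id": GENDER_CONCEPT.get(ccd["gender"], 0),
        "year_of_birth": int(ccd["birth_date"][:4]),
    }
    conditions = [
        {
            "person_id": ccd["patient_id"],
            "condition_source_value": p["code"],  # source code kept verbatim
            "condition_start_date": p["onset"],
        }
        for p in ccd.get("problems", [])
    ]
    return person, conditions
```

Once every source feeds the same CDM tables, a single predictive model (the PMML-disseminated models of the second study) can score patients regardless of which EHR produced the record.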
30. Technical Debt Decision-Making Framework. Codabux, Zadia. 09 December 2016.
Software development companies strive to produce high-quality software. In commercial software development environments, resource and time constraints mean software is often developed hastily, which gives rise to technical debt. Technical debt refers to the consequences of taking shortcuts when developing software: these consequences include making the system difficult to maintain and defect-prone. Technical debt can have financial consequences and impede feature enhancements. Identifying technical debt and deciding which debt to address is challenging given resource constraints; project managers must decide which debt has the highest priority and is most critical to the project. This decision-making process is not standardized and sometimes differs from project to project. My research goal is to develop a framework that project managers can use in their decision-making process to prioritize technical debt based on its potential impact. To achieve this goal, I survey software practitioners, conduct literature reviews, and mine software repositories for historical data to build a framework that models the technical debt decision-making process and informs practitioners of the most critical debt items.
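One way to picture the prioritization step of such a framework, as a hedged sketch rather than the dissertation's actual method, is a weighted scoring function over whatever impact factors the framework elicits (the factor names and weights below are invented for illustration):

```python
def prioritize_debt(items, weights):
    """Rank technical debt items by a weighted impact score, highest first.
    'weights' maps factor names (e.g. maintainability burden, defect
    proneness) to their relative importance; both are inputs the framework
    would derive from practitioner surveys and repository mining."""
    def score(item):
        return sum(weights[k] * item[k] for k in weights)
    return sorted(items, key=score, reverse=True)
```

The point of standardizing the process is precisely that the factors and weights become explicit and shared, instead of varying implicitly from project to project.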