1.
Modeling and optimization of wastewater treatment process with a data-driven approach. Wei, Xiupeng, 01 May 2013.
The primary objective of this research is to model and optimize the wastewater treatment process in a wastewater treatment plant (WWTP). Because the treatment process is complex, its operation poses challenges. Traditional physics-based and mathematical models have limitations in predicting the behavior of the wastewater process and in optimizing its operations.
Automated control and information technology enable continuous collection of data. The collected data contains process information that makes it possible to predict and optimize the process.
Although the data offered by the WWTP is plentiful, it has not been fully used to extract meaningful information for improving plant performance. A data-driven approach is promising for identifying useful patterns and models, using algorithms drawn from statistics and computational intelligence. Successful data-mining applications have been reported in business, manufacturing, science, and engineering.
The focus of this research is to model and optimize the wastewater treatment process and ultimately improve the efficiency of WWTPs. To maintain effluent quality, the influent flow rate and the influent pollutants, including total suspended solids (TSS) and CBOD, are predicted over short- and long-term horizons to provide information for efficient operation of the treatment process. To reduce energy consumption and improve energy efficiency, the biogas production process, the activated sludge process, and the pumping station are modeled and optimized with evolutionary computation algorithms.
Modeling and optimization of wastewater treatment processes faces three major challenges. The first is related to the data. Wastewater treatment involves physical, chemical, and biological processes, and its instruments collect large volumes of data. Many variables in the dataset are strongly coupled, and the data is noisy, uncertain, and incomplete. Therefore, several preprocessing algorithms are needed to clean the data, reduce its dimensionality, and determine important variables. The second challenge is the temporal nature of the process. Different data-mining algorithms are used to obtain accurate models. The last challenge is the optimization of the process models. As the models are usually highly nonlinear and dynamic, novel evolutionary computation algorithms are used.
This research addresses these three challenges. The major contribution of this research is in modeling and optimizing the wastewater treatment process with a data-driven approach. The process model built is then optimized with evolutionary computational algorithms to find the optimal solutions for improving process efficiency and reducing energy consumption.
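The optimization stage described above can be illustrated with a small sketch. The surrogate cost model, the two pump-speed variables, and all constants below are hypothetical stand-ins rather than the thesis's fitted process models; only the general pattern, an evolutionary search over a nonlinear cost function with an operational constraint, reflects the text.

```python
import random

random.seed(42)

# Hypothetical surrogate: energy cost of two pump speeds (fractions of max),
# penalized when delivered flow drops below a treatment-quality threshold.
# All coefficients are invented for illustration.
def energy_cost(x):
    s1, s2 = x
    energy = 5.0 * s1 ** 2 + 3.0 * s2 ** 2        # pumping energy (arbitrary units)
    flow = 0.6 * s1 + 0.4 * s2                    # delivered flow
    penalty = 100.0 * max(0.0, 0.5 - flow) ** 2   # soft constraint: flow >= 0.5
    return energy + penalty

def evolve(fitness, bounds, pop_size=30, generations=200, sigma=0.1):
    """Minimal elitist (mu+lambda) evolution strategy over box-bounded variables."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for parent in pop:
            # Gaussian mutation, clipped to the bounds.
            child = [min(hi, max(lo, g + random.gauss(0, sigma))) for g in parent]
            children.append(child)
        # Keep the best pop_size individuals of parents + children.
        pop = sorted(pop + children, key=fitness)[:pop_size]
    return pop[0]

best = evolve(energy_cost, bounds=(0.0, 1.0))
print(best, energy_cost(best))
```

The elitist selection guarantees the cost never increases between generations, which is why even this bare-bones strategy settles near the constrained optimum of the toy model.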
2.
Data Driven Approaches to Testing Homogeneity of Intraclass Correlation Coefficients. Wu, Baohua, 01 December 2010.
The test of homogeneity for intraclass correlation coefficients has been an active topic in statistical research. Several chi-square tests have been proposed for testing the homogeneity of intraclass correlations over the past few decades. The main concern is that these methods are seriously biased when sample sizes are not large. In this thesis, data-driven approaches are proposed for testing the homogeneity of intraclass correlation coefficients of several populations. Through simulation studies, the data-driven methods are shown to be less biased and more accurate than some commonly used chi-square tests.
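One simple data-driven alternative to a chi-square statistic is a resampling test: estimate the ICC of each population by one-way ANOVA, take the spread of the estimates as the statistic, and recompute it under random reassignment of whole groups to populations. This is a generic permutation sketch with simulated data, not the specific procedure proposed in the thesis.

```python
import random

random.seed(7)

def icc_oneway(groups):
    """One-way ANOVA estimator of the intraclass correlation, equal group sizes."""
    k = len(groups[0])                      # group (family) size
    n = len(groups)                         # number of groups
    grand = sum(sum(g) for g in groups) / (n * k)
    msb = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((x - sum(g) / k) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def homogeneity_pvalue(populations, n_perm=500):
    """Permutation test on the spread of per-population ICC estimates."""
    sizes = [len(p) for p in populations]
    def spread(pops):
        iccs = [icc_oneway(p) for p in pops]
        mean = sum(iccs) / len(iccs)
        return sum((r - mean) ** 2 for r in iccs)
    observed = spread(populations)
    pool = [g for p in populations for g in p]
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pool)               # reassign whole groups to populations
        resampled, start = [], 0
        for s in sizes:
            resampled.append(pool[start:start + s])
            start += s
        if spread(resampled) >= observed:
            hits += 1
    return hits / n_perm

def simulate(n_groups, k=4, effect_sd=1.0, noise_sd=1.0):
    """Groups sharing a random effect, giving a common underlying ICC."""
    out = []
    for _ in range(n_groups):
        effect = random.gauss(0, effect_sd)
        out.append([effect + random.gauss(0, noise_sd) for _ in range(k)])
    return out

pops = [simulate(30), simulate(30)]        # two populations, same true ICC
p = homogeneity_pvalue(pops)
print(p)
```

Because the null distribution is built from the data itself rather than an asymptotic chi-square approximation, this style of test avoids the small-sample bias the abstract criticizes, at the price of extra computation.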
3.
Developing Materials Informatics Workbench for Expediting the Discovery of Novel Compound Materials. Kwok Wai Steny Cheung, date unknown.
This project presents a Materials Informatics Workbench that resolves the challenges confronting materials scientists in assimilating and disseminating materials science data. It combines and extends Semantic Web technologies, the Web Service Business Process Execution Language (WSBPEL), and Open Archives Initiative Object Reuse and Exchange (OAI-ORE). These technologies enable the novel user interfaces, algorithms, and techniques behind the major components of the proposed workbench. In recent years, materials scientists have been struggling with the ever-increasing amount of complex materials science data available from online sources and generated by high-throughput laboratory instruments and data-intensive software tools. Meanwhile, funding organizations have encouraged, and even mandated, sponsored researchers across many domains to make scientifically valuable data, together with traditional scholarly publications, available to the public. This open-access requirement creates an opportunity for materials scientists who can exploit the available data to expedite the discovery of novel compound materials. However, it also poses challenges. Materials scientists report difficulties in precisely locating and processing diverse but related data from different sources and in effectively managing laboratory information and data. They also lack simple tools for data access and publication, and require measures for intellectual property (IP) protection and standards for data sharing, exchange, and reuse. The following paragraphs describe how the major workbench components resolve these challenges.
First, the materials science ontology, represented in the Web Ontology Language (OWL), enables (1) mapping between and integration of disparate materials science databases, (2) modelling of experimental provenance information acquired in the physical and digital domains, and (3) inferencing and extraction of new knowledge within the materials science domain. Next, the federated search interface based on this ontology enables materials scientists to search, retrieve, correlate, and integrate diverse but related materials science data and information across disparate databases. Then, a workflow management system underpinned by the WSBPEL engine not only manages the scientific investigation process, which involves multidisciplinary scientists distributed over a wide geographic region and self-contained computational services, but also systematically acquires the experimental data and information generated by the process. Finally, the provenance-aware scientific compound-object publishing system gives scientists a view of the highly complex scientific workflow at multiple levels of granularity. Thus, they can easily comprehend the science of the workflow, access experimental information, and keep confidential information from unauthorised viewers. It also enables scientists to quickly author and publish a scientific compound object that (1) incorporates not only internal experimental data, with provenance information, from the rendered view of a scientific experimental workflow, but also external digital objects with metadata, for example published scholarly papers discoverable via the World Wide Web, (2) is self-contained and self-explanatory with IP protection, and (3) can be disseminated widely on the Web. Prototype systems of the major workbench components have been developed.
The quality of the materials science ontology has been assessed against Gruber's principles for the design of ontologies used for knowledge sharing, while its applicability has been evaluated through two of the workbench components, the ontology-based federated search interface and the provenance-aware scientific compound-object publishing system. The prototype systems have been deployed within a team of fuel cell scientists at the Australian Institute for Bioengineering and Nanotechnology (AIBN) at the University of Queensland. The overall user feedback to date has been very positive. First, the scientists were impressed with the convenience of the ontology-based federated search interface because of its easy and quick access to the integrated databases and analytical tools. Next, they were relieved that the complex compound-synthesis process could be managed by and monitored through the WSBPEL workflow management system, and pleased that the system can systematically acquire the huge amounts of complex experimental data produced by self-contained computational services, data no longer handled manually in paper-based laboratory notebooks. Finally, the scientific compound-object publishing system inspired them to publish their data voluntarily, because it provides a scientist-friendly, intuitive interface that enables them to (1) intuitively access experimental data and information, (2) author self-contained, self-explanatory scientific compound objects that incorporate experimental data and information about research outcomes, together with published scholarly papers and peer-reviewed datasets that strengthen those outcomes, (3) enforce proper measures for IP protection, (4) make those objects comply with Open Archives Initiative Object Reuse and Exchange (OAI-ORE) to maximize their dissemination over the Web, and (5) ingest those objects into a Fedora-based digital library.
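The idea behind ontology-mediated federated search can be sketched in miniature: a shared concept maps onto different field names in different databases, so one query reaches all of them. The database names, field names, and records below are invented; the actual workbench uses an OWL ontology and far richer mappings.

```python
# Toy ontology: each shared concept maps to the local field name in each database.
ontology = {
    "bandgap": {"db_a": "band_gap_ev", "db_b": "Eg"},
    "formula": {"db_a": "formula", "db_b": "composition"},
}

# Two mock databases with different schemas (all records invented).
db_a = [{"formula": "TiO2", "band_gap_ev": 3.2},
        {"formula": "Si", "band_gap_ev": 1.1}]
db_b = [{"composition": "GaN", "Eg": 3.4}]

def federated_search(concept, min_value):
    """Query every database through the ontology's field mappings."""
    results = []
    for name, rows in (("db_a", db_a), ("db_b", db_b)):
        value_field = ontology[concept][name]
        formula_field = ontology["formula"][name]
        for row in rows:
            if row[value_field] >= min_value:
                results.append({"source": name,
                                "formula": row[formula_field],
                                concept: row[value_field]})
    return results

wide_gap = federated_search("bandgap", 3.0)
print(wide_gap)
```

A single concept-level query thus returns correlated results from both schemas, which is the essence of what the ontology-based federated search interface provides at much larger scale.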
5.
A data driven approach for automating vehicle activated signs. Jomaa, Diala, January 2016.
Vehicle activated signs (VAS) display a warning message when an approaching vehicle exceeds a particular speed threshold. They are often installed on local roads and are usually mains-powered, though battery- and solar-powered VAS are also commonplace. This thesis investigated the development of an automatic trigger speed for vehicle activated signs in order to influence driver behaviour, the effect of which was measured in terms of reduced mean speed and lower standard deviation. A comprehensive understanding of the effect of the VAS trigger speed on driver behaviour was established by systematically collecting data. Specifically, data on time of day, speed, length, and direction of each vehicle were collected using Doppler radar installed at the roadside. A data-driven calibration method for the radar used in the experiment was also developed and evaluated. Results indicate that the trigger speed of the VAS had a variable effect on drivers' speed at different sites and at different times of day. It is evident that the optimal trigger speed should be set near the 85th percentile speed to lower the standard deviation. For battery- and solar-powered VAS, trigger speeds between the 50th and 85th percentiles offered the best compromise between safety and power consumption. Results also indicate that different classes of vehicles differ in mean speed and standard deviation; on a highway, the mean speed of cars differs only slightly from that of trucks, whereas a significant difference was observed between vehicle classes on local roads. A differential trigger speed was therefore investigated for the sake of completeness. A data-driven approach using random forests was found to be appropriate for predicting trigger speeds with respect to vehicle type and traffic conditions.
The predicted trigger speeds were found to be consistently around the 85th percentile speed, which justifies the choice of the automatic model.
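The percentile-based trigger-speed rule can be sketched as follows. The speed samples are synthetic stand-ins for radar data, and the nearest-rank percentile used here is just one simple convention among several; the thesis's differential model additionally conditions on traffic state via a random forest.

```python
def percentile(values, p):
    """Nearest-rank style percentile over sorted values."""
    s = sorted(values)
    idx = round(p / 100 * (len(s) - 1))
    return s[max(0, min(len(s) - 1, idx))]

# Synthetic per-class speed observations (km/h), standing in for radar logs.
speeds = {
    "car": list(range(40, 61)),    # 40..60 km/h
    "truck": list(range(35, 56)),  # 35..55 km/h
}

# One trigger speed per vehicle class, set at the 85th percentile as the
# results above suggest for mains-powered signs.
triggers = {cls: percentile(v, 85) for cls, v in speeds.items()}
print(triggers)
```

For battery- or solar-powered signs, the same helper called with a value between 50 and 85 would trade some speed-reduction effect for fewer activations and lower power draw.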
6.
Quantitative analysis of 3D tissue deformation reveals key cellular mechanism associated with initial heart looping. Kawahira, Naofumi, 27 July 2020. (Japanese title: 初期心ループ形成時における3次元組織動態の定量解析と細胞機構の解明)
Kyoto University / Doctor of Medical Science / Degree no. 甲第22687号 (医博第4631号) / Graduate School of Medicine, Kyoto University / Examiners: Prof. 山下 潤, Prof. 木村 剛, Prof. 浅野 雅秀 / Conferred under Article 4, Paragraph 1 of the Degree Regulations / DFAM
7.
What would be the highest electrical loads with -20°C in Stockholm in 2022? A study of the sensitivity of electrical loads to outdoor temperature in the Stockholm region. Mellon, Magali, January 2022.
In the last ten years, no significant increase in the peak electricity consumption of the Stockholm region has been observed, despite new customers being connected to the grid. But as urbanization continues, and with electrification a decisive step on decarbonization pathways, more growth is expected in the future. However, the Swedish Transmission System Operator (TSO), Svenska kraftnät, can supply only limited power to the Stockholm region. Distribution System Operators (DSOs) such as Vattenfall Eldistribution, which operates two thirds of the Stockholm region's distribution grid, need to find solutions to satisfy increasing demand with a limited power supply. Forecasting the worst-case scenarios, i.e., the highest possible loads, therefore becomes a critical question. In Sweden, peak loads are usually triggered by the coldest temperatures, but recent winters have been mild: this brings uncertainty about a possible underlying temperature-adjusted growth masked by relatively warm winters. Answering the question 'What would be the highest loads in 2022 with -20°C in the Stockholm region?' could help Vattenfall Eldistribution estimate the flexibility needed today and design the future grid with the necessary reinforcements. This master thesis uses a data-driven approach based on eleven years of hourly data over the period 2010-2021 to investigate the temperature sensitivity of the aggregated electricity load in the Stockholm region. First, an exploratory analysis quantifies how large the growth has been in the past ten years and examines how and when peak loads occur. The insights obtained help design two regression techniques that investigate the evolution of the loads across years and provide first estimates of peak loads. Then, a Seasonal Autoregressive Integrated Moving Average with eXogenous regressors (SARIMAX) process is used to model a full winter of load as a function of temperature.
This third method provides new, more reliable estimates of peak loads in 2022 at, e.g., -20°C. Finally, the SARIMAX estimates are retained, and a synthesis of the three methods' overall outlooks and possible extensions of the SARIMAX method is presented in a final section. The results show a significant increase in load levels in southern Stockholm ('Stockholm Södra') between 2010 and 2015 and a stable evolution onwards, while the electricity consumption in northern Stockholm ('Stockholm Norra') remained stable over the period 2010-2021. During a very cold winter, the electricity demand is expected to exceed the subscription levels for about 300 hours in Stockholm Södra and 200 hours in Stockholm Norra. However, this would be a rare occurrence, which suggests that short-term solutions could be favoured over costly grid extension work. Many questions remain open: the capability of local heat and power production and of electricity price signals to regulate today's demand is yet to be investigated, and additional work exploring future demand scenarios at a smaller scale could also be contemplated. / (Swedish summary, translated:) Over the past decade, Stockholm's peak electricity consumption has not increased markedly despite new customers connecting to the grid. With rapid urbanization, increased electrification is a key route to a fossil-free society, and this trend is expected to continue in the coming decades. At the same time, the Swedish transmission system operator (TSO), Svenska kraftnät, is beginning to struggle to deliver power to the Stockholm region because of limited transmission capacity. Local distribution system operators (DSOs), such as Vattenfall Eldistribution, Sweden's largest DSO, must therefore investigate new solutions to meet the growing demand for electricity, and identifying worst-case scenarios, i.e., forecasting the highest possible consumption, becomes very important. Stockholm consumes the most electricity when it is coldest, but recent winters have been mild compared with, for example, the winters of 2010-2011 and 2012-2013, when temperatures in the Stockholm region stayed below -20°C for several days in a row. This leads to a relevant question: 'What would Stockholm's electricity consumption at -20°C be in 2021 or 2022?' Answering it quantitatively would help Vattenfall design the future grid and ensure that the right amount of flexibility is held in reserve in the current Stockholm Flex market. This thesis analyses the question with a data-driven approach based on roughly ten years of time-series data, investigating the temperature sensitivity of aggregated electricity demand in the Stockholm region. An exploratory analysis first establishes when and how peak loads occur; these insights guide two regression methods that examine the evolution of consumption over the past decade and estimate peak-load values. A SARIMAX model is then used to model a full winter of load as a function of temperature, providing more reliable estimates of peak loads in 2022 at -20°C. The main conclusions are that consumption increased in Stockholm Södra, especially between 2010 and 2015, while it remained stable in Stockholm Norra throughout the period. During a truly cold winter there is a risk that, for a number of hours, demand will exceed Vattenfall Eldistribution's total subscription level. However, the probability of this is very low, suggesting that there are other ways to handle the demand than increasing grid transmission capacity. The thesis raises several further questions, for example the potential of local combined heat and power plants and of electricity price signals; further work could also explore future consumption scenarios at a smaller scale.
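A drastically simplified stand-in for the temperature-sensitivity estimate is an ordinary least-squares fit of load against outdoor temperature, extrapolated to -20°C. The numbers below are synthetic and the relation is made exactly linear for clarity; the thesis's SARIMAX model additionally captures seasonality, autocorrelation, and noise.

```python
# Synthetic hourly observations: temperature (C) and load (MW) following an
# exact linear relation load = 1000 - 15 * temp, for illustration only.
temps = [-10, -5, 0, 5, 10, 15]
loads = [1000 - 15 * t for t in temps]

n = len(temps)
mean_t = sum(temps) / n
mean_l = sum(loads) / n

# Closed-form ordinary least squares for a single regressor.
slope = (sum((t - mean_t) * (l - mean_l) for t, l in zip(temps, loads))
         / sum((t - mean_t) ** 2 for t in temps))
intercept = mean_l - slope * mean_t

# Extrapolate to the design temperature of -20 C.
peak_estimate = intercept + slope * (-20)
print(slope, intercept, peak_estimate)
```

The negative slope is the temperature sensitivity (MW per degree); on real data the extrapolation beyond the observed temperature range is exactly where a purely static regression becomes unreliable, which motivates the dynamic SARIMAX treatment.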
8.
Smart Meters Big Data: Behavioral Analytics via Incremental Data Mining and Visualization. Singh, Shailendra, January 2016.
The big data framework applied to smart meters offers an exceptional platform for data-driven forecasting and decision making to achieve sustainable energy efficiency. Earning consumer confidence, by respecting occupants' energy consumption behavior and preferences, is imperative for improved participation in various energy programs but difficult to obtain. The key elements for understanding and predicting household energy consumption are the activities occupants perform, the appliances used and the times at which they are used, and inter-appliance dependencies. This information can be extracted from the context-rich big data from smart meters, although this is challenging because: (1) it is not trivial to mine complex interdependencies between appliances from multiple concurrent data streams; (2) it is difficult to derive accurate relationships between interval-based events in which multiple appliance usages persist; and (3) continuous generation of energy consumption data can change appliance associations over time. To overcome these challenges, we propose an unsupervised, progressive, incremental data-mining technique using frequent pattern mining (appliance-appliance associations) and cluster analysis (appliance-time associations) coupled with a Bayesian-network-based prediction model. The proposed technique addresses the need to analyze temporal energy consumption patterns at the appliance level, which directly reflect consumers' behaviors and provide a basis for generalizing household energy models. Extensive experiments were performed on the model with real-world datasets, and strong associations were discovered. The accuracy of the proposed model for predicting multiple-appliance usage outperformed a support vector machine at every stage, attaining accuracies of 81.65%, 85.90%, and 89.58% for 25%, 50%, and 75% of the training dataset size, respectively.
Moreover, accuracies of 81.89%, 75.88%, 79.23%, 74.74%, and 72.81% were obtained for short-term (hourly) and long-term (day, week, month, and season) energy consumption forecasts, respectively.
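The appliance-appliance association step can be sketched with a minimal frequent-pair count over interval transactions. The transactions below are invented, and the thesis's progressive, incremental machinery (and the Bayesian prediction layer) is omitted; this shows only the support-threshold core of frequent pattern mining.

```python
from itertools import combinations
from collections import Counter

# Each transaction: the set of appliances active in one time interval (invented data).
transactions = [
    {"kettle", "toaster", "radio"},
    {"kettle", "toaster"},
    {"washer", "dryer"},
    {"kettle", "toaster", "washer"},
    {"washer", "dryer", "radio"},
]

def frequent_pairs(transactions, min_support=0.4):
    """Return appliance pairs whose support (fraction of intervals) meets the threshold."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):   # canonical ordering of each pair
            counts[pair] += 1
    n = len(transactions)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

assoc = frequent_pairs(transactions)
print(assoc)
```

In an incremental setting, the `Counter` would be retained between batches and supports recomputed as new intervals arrive, so that associations can strengthen or decay over time as point (3) above requires.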
9.
Allergic sensitization over the first eight years of life, associated factors and morbidity in the PARIS birth cohort. Gabet, Stephan, 02 October 2017. (French title: Sensibilisation allergénique au cours des huit premières années de vie, facteurs et morbidité associés dans la cohorte de naissances PARIS)
Background. The first years of life appear to be critical for the development of allergic sensitization. Objectives. This thesis aims (i) to describe allergic sensitization profiles in infants and children, (ii) to assess the link between these sensitization profiles and allergic morbidity, and (iii) to identify risk factors for allergic sensitization. Methods. This work concerns children in the Pollution and Asthma Risk: an Infant Study (PARIS) population-based prospective birth cohort. Allergic sensitization was assessed in 1,860 infants at 18 months and 1,007 children at 8/9 years by measuring specific IgE against 16 and 19 allergens, respectively. Health and living-condition data were collected by repeated standardized questionnaires. Sensitization profiles and morbidity profiles were identified using unsupervised classification and related to each other by multinomial logistic regression. Finally, risk factors for early allergic sensitization were assessed by multivariate logistic regression. Results. As early as 18 months of age, 13.8% of children were sensitized and 6.2% were multi-sensitized. At 8/9 years, the corresponding prevalences were 34.5% and 19.8%. The sensitization profiles identified in infancy (three) and in childhood (five) differed in terms of allergic morbidity. The risk factor analysis helped clarify the role of early exposure to allergens and microorganisms in allergic sensitization. Conclusion. This thesis improves understanding of the natural history of allergic sensitization from the first years of life onward. This knowledge is essential for preventing the allergic diseases that follow.
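The unsupervised-classification step can be illustrated with a minimal k-means pass over toy IgE-like vectors. The data, the two-cluster choice, and the deterministic initialization are all assumptions made for this sketch; the cohort's actual allergen panel and clustering algorithm are richer.

```python
def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k=2, iters=20):
    """Plain k-means; initialized from the first k points for determinism."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = [min(range(k), key=lambda i: dist2(p, centroids[i])) for p in points]
        # Recompute centroids as cluster means.
        for i in range(k):
            members = [p for p, l in zip(points, labels) if l == i]
            if members:
                centroids[i] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centroids

# Toy specific-IgE vectors (kU/L) for three allergens: two children near zero
# (non-sensitized) and two elevated on several allergens (multi-sensitized).
profiles = [
    [0.1, 0.0, 0.0],
    [0.0, 0.2, 0.1],
    [3.0, 2.8, 0.0],
    [2.7, 3.1, 0.2],
]
labels, centroids = kmeans(profiles)
print(labels)
```

With the profile labels in hand, the thesis's next step, relating profiles to morbidity, corresponds to fitting a multinomial logistic regression with the cluster label as the outcome.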
10.
Quantifying Trust in Wearable Medical Devices. Thomas, Mini, January 2024.
This thesis explores a methodology to quantify trust in wearable medical devices (WMD) by addressing two main challenges: identifying the key factors influencing trust and developing a formal framework for precise trust quantification under uncertainty. The work empirically validates trust factors and uses a Bayesian network to quantify trust. It further employs a data-driven approach to estimate the Bayesian parameters, facilitating query-based inference, and validates the trust model with real and synthetic datasets, culminating in a customizable, parameterized trust-evaluation prototype for WMD. / Advances in sensor and digital communication technologies have revolutionized the capability of wearable medical devices to monitor patients' health remotely, raising growing concerns about trust in these devices. Quantifying trust in WMD is needed for their continued acceptance and adoption by different users, but poses two significant challenges owing to the subjective and stochastic nature of trust. The first challenge is identifying the factors that influence trust in WMD; the second is developing a formal framework for precise quantification of trust that accounts for the uncertainty and variability of those factors. This thesis proposes a methodology to quantify trust in WMD that addresses both challenges.
In this thesis, first, we devise a method to empirically validate dominant factors that influence the trustworthiness of WMD from the perspective of device users. We identified the users’ awareness of trust factors reported in the literature and additional user concerns influencing their trust. These factors are stepping stones for defining the specifications and quantification of trust in WMD.
Second, we develop a probabilistic graphical model using a Bayesian network to quantify trust in WMD. Using the Bayesian network, the stochastic nature of trust is expressed as probabilities, that is, subjective degrees of belief over a set of random variables in the domain. We define each random variable in the network by the trust factors identified from the literature and validated by our empirical study. We construct the trust structure as a directed acyclic graph to represent the relationships between the variables compactly and transparently. We set the inter-node relationships using the goal refinement technique, refining the high-level goal of trustworthiness into lower-level goals that can be objectively implemented as measurable factors.
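The goal-refinement idea above can be sketched in a few lines: a high-level trust goal is refined into measurable leaf factors, and the resulting structure must be a directed acyclic graph. The node names and edges below are illustrative placeholders, not the thesis's actual trust model.

```python
from collections import defaultdict, deque

# Hypothetical refinement of a high-level "Trust" goal into intermediate
# goals and measurable leaf factors (assumed names, for illustration only).
edges = [
    ("DataAccuracy", "Reliability"),
    ("SensorUptime", "Reliability"),
    ("Encryption", "Security"),
    ("AccessControl", "Security"),
    ("Reliability", "Trust"),
    ("Security", "Trust"),
]

def topological_order(edges):
    """Return a topological order of the nodes, proving the refinement
    graph is a DAG; raise if a cycle sneaks in."""
    children = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for parent, child in edges:
        children[parent].append(child)
        indegree[child] += 1
        nodes.update((parent, child))
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(nodes):
        raise ValueError("refinement graph contains a cycle")
    return order

order = topological_order(edges)
print(order[-1])  # → Trust (the high-level goal sits at the top of the DAG)
```

In this toy structure the measurable factors come first in the order and the high-level trust node last, mirroring how goal refinement places objectively measurable leaves under the abstract trustworthiness goal.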
Third, to learn and estimate the parameters of the Bayesian network, we need access to the probabilities of all nodes; assuming a uniform or Gaussian distribution, or using values based on expert opinion, may not fully represent the complexities of the factors influencing trust. We propose a data-driven approach to generate priors and estimate the Bayesian parameters, in which data collected from WMD for all the measurable factors (nodes) is used to generate the priors. We use non-functional requirements engineering techniques to quantify the impact of each relationship between nodes in the Bayesian network. We design propagation rules to aggregate the quantified relationships across the nodes of the network. This approach facilitates the computation of conditional probability distributions and enables query-based inference on any node, including the high-level trust node, given the available evidence.
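Query-based inference of the kind described above can be illustrated on a toy two-parent network (Accuracy and Security feeding a Trust node). All priors and conditional values below are invented placeholders standing in for the data-derived parameters in the thesis; the point is only to show how evidence on a measurable node updates belief in the high-level trust node.

```python
from itertools import product

# Assumed priors, as if estimated from device data (illustrative values).
p_accuracy = {1: 0.8, 0: 0.2}
p_security = {1: 0.7, 0: 0.3}
# Assumed P(Trust=1 | Accuracy, Security), standing in for a propagation rule.
p_trust_given = {(1, 1): 0.95, (1, 0): 0.60, (0, 1): 0.50, (0, 0): 0.05}

def p_trust(evidence=None):
    """P(Trust=1 | evidence) by enumerating over the parent variables."""
    evidence = evidence or {}
    num = den = 0.0
    for a, s in product((0, 1), repeat=2):
        if "Accuracy" in evidence and a != evidence["Accuracy"]:
            continue
        if "Security" in evidence and s != evidence["Security"]:
            continue
        joint = p_accuracy[a] * p_security[s]   # parents are independent here
        num += joint * p_trust_given[(a, s)]
        den += joint
    return num / den

print(round(p_trust(), 3))                 # → 0.749 (marginal belief in trust)
print(round(p_trust({"Accuracy": 1}), 3))  # → 0.845 (belief given accurate data)
```

Observing high accuracy raises the trust estimate from 0.749 to 0.845; the same enumeration answers queries on any node given any evidence, which is what a larger network with data-driven conditional probability tables does at scale.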
The results of this thesis are evaluated through several experimental validations. The factors influencing trust in WMD are empirically validated by an extensive survey of 187 potential users. The learnability and generalizability of the proposed trust network are validated with a real dataset collected from three users of WMD under two conditions: performing predefined activities and performing regular daily activities. To extend the variability of conditions, we generated an extensive and representative synthetic dataset and validated the trust network accordingly. Finally, to test the practicality of our approach, we implemented a user-configurable, parameterized prototype that allows users of WMD to construct a customizable trust network and effectively compare the trustworthiness of different devices. The prototype enables the healthcare industry to adapt and adopt this method to evaluate the trustworthiness of WMD for their own specific use cases. / Thesis / Doctor of Philosophy (PhD) / In this thesis, two challenges in quantifying trust in wearable medical devices are addressed. The first challenge is the identification of factors influencing trust, which are inherently subjective and vary widely among users. To address this challenge, we conducted an extensive survey to identify and validate the trust factors. These factors are stepping stones for defining the specifications and quantifying trust in wearable medical devices.
The second challenge is to develop a precise method for quantifying trust while taking into account the uncertainty and variability of trust factors. We constructed a Bayesian network that captures the complexities of trust as probabilities of the trust factors (identified from the survey) and developed a data-driven approach to estimate the parameters of the Bayesian network to compute a measure of trust.
The findings of this thesis are empirically and experimentally validated across multiple use
cases, incorporating real and synthetic data, various testing conditions, and diverse Bayesian network configurations. Additionally, we developed a customizable, parameterized prototype that empowers users and healthcare providers to effectively assess and compare the trustworthiness of different wearable medical devices.