1

The Factors Affecting Wind Erosion in Southern Utah

Ozturk, Mehmet 01 August 2019 (has links)
Wind erosion is a global issue affecting millions of people in drylands, causing environmental problems (accelerated snowmelt), public health concerns (respiratory disease), and socioeconomic costs (property damage and post-storm cleanup). Disturbances in drylands can be irreversible and can lead to natural disasters such as the 1930s Dust Bowl. With increasing attention on aeolian processes, many studies have relied on ground-based measurements or wind tunnel experiments. Ground-based measurements are important for validating model predictions and for testing the effects and interactions of the factors known to drive wind erosion. Here, a machine-learning model (random forest) was used to describe sediment flux as a function of wind speed, soil moisture, precipitation, soil roughness, soil crusts, and soil texture. Model performance was compared to previous results before analyzing four additional years of sediment flux data and adding estimates of soil moisture to the model. The random forest explained more variance than a regression tree (a 7.5% improvement). With the additional soil moisture data, model performance increased by 13.13%, and with the full dataset the model improved total performance by 30.50% over the previous study. This research is one of the few studies to combine a large-scale network of BSNE samplers with a long time series of data to quantify seasonal sediment flux under different soil covers in southern Utah. The results will be helpful to land managers seeking to control wind erosion, to scientists choosing variables for further modeling, and to local communities raising public awareness of the effects of wind erosion.
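The model comparison described in this abstract — a single regression tree versus a random forest on the same predictors — can be sketched as follows. This is a hypothetical illustration with synthetic stand-in data, not the study's BSNE measurements; the six predictor columns merely mimic the listed variables (wind speed, soil moisture, etc.).

```python
# Sketch: compare a single regression tree with a random forest on synthetic
# data with six predictors analogous to the study's (wind speed, soil
# moisture, precipitation, roughness, crust, texture). Assumed relationship.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0, 1, size=(n, 6))  # stand-ins for the six predictors
# Invented nonlinear response: flux rises sharply with "wind speed" (col 0)
# and falls off with "soil moisture" (col 1), plus noise.
y = (X[:, 0] ** 3) * np.exp(-3 * X[:, 1]) + 0.05 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Variance explained on held-out data: the ensemble averages away the
# overfitting of any single deep tree.
r2_tree = r2_score(y_te, tree.predict(X_te))
r2_forest = r2_score(y_te, forest.predict(X_te))
print(f"tree R2: {r2_tree:.3f}  forest R2: {r2_forest:.3f}")
```

On data like this, the forest's held-out R² is typically higher than the single tree's, mirroring the improvement in variance explained reported above.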
2

The role of environmental factors on trace-metal concentrations in urban lakes - Lake Pampulha, Lake Créteil and 49 lakes in the Ile-de-France region

Tran khac, Viet 19 December 2016 (has links)
Lakes play a particular role in the water cycle of urban catchments. Thermal stratification and a long water residence time favour phytoplankton production. Most metals are naturally present in the environment in trace amounts and are essential to the growth and reproduction of organisms. However, some are also well known for their toxic effects on animals and humans. Total metal concentrations do not reflect ecotoxicity, which depends on the metals' properties and speciation (particulate and dissolved fractions, the latter either labile/bioavailable or inert). Trace metals can be adsorbed to various components of aquatic systems, including inorganic and organic ligands. Metal binding to dissolved organic matter (DOM), in particular humic substances, has been widely studied. In urban lakes, phytoplankton can produce autochthonous, non-humic DOM that can also bind metals; yet few studies have addressed trace-metal speciation in the lake water column. The main objectives of this thesis are (1) to obtain a consistent database of trace-metal concentrations in the water column of representative urban lakes; (2) to assess their bioavailability through an appropriate speciation technique; (3) to analyze the seasonal and spatial evolution of the metals and their speciation; (4) to study the impact of environmental variables, particularly of dissolved organic matter related to phytoplankton production, on metal bioavailability; and (5) to link metal concentrations to land use in the lake watershed. Our methodology is based on a dense field survey of the water bodies combined with specific laboratory analyses.
The research was conducted on three study sites: Lake Créteil (France), Lake Pampulha (Brazil) and a panel of 49 peri-urban lakes (Ile de France). Lake Créteil is an urban lake impacted by anthropogenic pollution; a large set of continuous monitoring equipment provided part of the data set. In the Lake Pampulha catchment, anthropogenic pressure is high and the lake faces many point and non-point pollution sources. The climate and limnological characteristics of the two lakes are also very different. The panel of 49 Ile-de-France lakes was sampled once during three successive summers (2011-2013), providing a synoptic data set representative of metal contamination across a densely anthropized region. To explain the role of the environmental variables on metal concentrations, we applied the Random Forest model to the Lake Pampulha and 49-lake data sets, with two specific objectives: (1) in Lake Pampulha, to understand the role of environmental variables on the labile, potentially bioavailable trace-metal concentration, and (2) in the 49 lakes, to understand the relationship between environmental variables, particularly watershed variables, and the dissolved metal concentrations. The analysis of these relationships provided the key results of this thesis. In Lake Pampulha, around 80% of the variance of labile cobalt is explained by limnological variables: Chl a, O2, pH and total phosphorus. For the other metals, the model did not explain more than 50% of the relationship between the labile fraction and the limnological variables. In the 49 urban lakes, the Random Forest model gave a good result for Co (60% of explained variance) and a very good result for Ni (86% of explained variance).
For Ni, the best explanatory variables are land-use variables such as "activities" (facilities for water, sanitation and energy, logistical warehouses, offices, etc.) and "landfill". This result is consistent with Lake Créteil, where the dissolved Ni concentration is particularly high and the "activities" and "landfill" land-use categories are dominant.
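The kind of analysis this abstract describes — fitting a random forest to limnological variables and asking which ones explain a labile-metal response — can be sketched roughly as below. All data are synthetic and the response function is an assumption for illustration; only the variable names (Chl a, O2, pH, total P) follow the abstract.

```python
# Sketch: explain a labile-metal concentration from limnological variables
# with a random forest, then rank variables by importance. Synthetic data;
# the response coefficients are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 1500
chla = rng.uniform(1, 80, n)     # chlorophyll a, ug/L (invented range)
o2 = rng.uniform(2, 14, n)       # dissolved oxygen, mg/L
ph = rng.uniform(6.5, 9.5, n)
tot_p = rng.uniform(10, 300, n)  # total phosphorus, ug/L
# Assumed response: labile metal tied mainly to Chl a and pH, not O2 or P.
labile = 0.02 * np.log(chla) + 0.01 * (ph - 7) ** 2 + 0.002 * rng.normal(size=n)

X = np.column_stack([chla, o2, ph, tot_p])
names = ["Chla", "O2", "pH", "totP"]
rf = RandomForestRegressor(n_estimators=300, oob_score=True,
                           random_state=1).fit(X, labile)

# Out-of-bag R2 plays the role of "% variance explained" in the abstract.
explained = rf.oob_score_
ranking = sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1])
print(f"OOB variance explained: {explained:.2f}")
print(ranking)
```

Here the importance ranking recovers the variables actually driving the synthetic response, which is the same diagnostic logic used to single out Chl a, O2, pH and total P for labile cobalt.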
3

Process monitoring and fault diagnosis using random forests

Auret, Lidia 12 1900 (has links)
Thesis (PhD (Process Engineering))--University of Stellenbosch, 2010. / Dissertation presented for the Degree of DOCTOR OF PHILOSOPHY (Extractive Metallurgical Engineering) in the Department of Process Engineering at the University of Stellenbosch / Fault diagnosis is an important component of process monitoring, relevant in the greater context of developing safer, cleaner and more cost-efficient processes. Data-driven unsupervised (or feature-extractive) approaches to fault diagnosis exploit the many measurements available on modern plants. Some current unsupervised approaches are hampered by their linearity assumptions, motivating the investigation of nonlinear methods; the diversity of data structures further motivates investigating novel feature extraction methodologies in process monitoring. Random forests are recently proposed statistical inference tools, deriving their predictive accuracy from the nonlinear nature of their constituent decision trees and the power of ensembles. Random forest committees provide more than just predictions: model information on data proximities can be exploited to provide random forest features. Variable importance measures show which variables are closely associated with a chosen response variable, while partial dependencies indicate how the important variables relate to that response variable. The purpose of this study was therefore to investigate the feasibility of a new unsupervised method based on random forests as a potentially viable contender in the family of statistical process monitoring tools. The hypothesis investigated was that unsupervised process monitoring and fault diagnosis can be improved by using features extracted from data with random forests, with further interpretation of fault conditions aided by random forest tools. The experimental results presented in this work support this hypothesis. An initial study was performed to assess the quality of random forest features.
Random forest features were shown to be generally difficult to interpret in terms of the geometry of the original variable space. Random forest mapping and demapping models were shown to be very accurate on training data, but to extrapolate weakly to unseen data falling outside the regions populated by training data. Random forest feature extraction was applied to unsupervised fault diagnosis for process data and compared to linear and nonlinear methods. Random forest results were comparable to existing techniques, with the majority of random forest detections due to variable reconstruction errors. Further investigation revealed that the residual detection success of random forests originates from the constrained responses and poor generalization artifacts of decision trees. Random forest variable importance measures and partial dependencies were incorporated in a visualization tool to allow for the interpretation of fault conditions. A dynamic change point detection application with random forests proved more successful than an existing principal-component-analysis-based approach, with the success of the random forest method again residing in reconstruction errors. The addition of random forest fault diagnosis and change point detection algorithms to a suite of abnormal event detection techniques is recommended. The distance-to-model diagnostic based on random forest mapping and demapping proved successful in this work, and the theoretical understanding gained supports the application of this method to further data sets.
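A core ingredient of the feature extraction described in this abstract is the random forest proximity: two points are "close" if many trees route them to the same leaf. For unlabelled process data, the standard construction (due to Breiman) contrasts the real data with a column-permuted synthetic copy. The sketch below uses synthetic data and is a simplified illustration, not the thesis's implementation.

```python
# Sketch of unsupervised random forest proximities: real data vs. a
# column-permuted synthetic copy, classifier trained to tell them apart,
# proximities read off from shared leaf membership. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X_real = rng.normal(size=(200, 5))
X_real[:, 1] = 0.8 * X_real[:, 0] + 0.2 * X_real[:, 1]  # correlated structure

# Permuting each column independently destroys the correlations, so the
# forest must learn the real data's structure to separate the two classes.
X_synth = np.column_stack([rng.permutation(X_real[:, j]) for j in range(5)])
X = np.vstack([X_real, X_synth])
y = np.array([1] * len(X_real) + [0] * len(X_synth))

rf = RandomForestClassifier(n_estimators=100, random_state=2).fit(X, y)

# Proximity of two points = fraction of trees placing them in the same leaf.
leaves = rf.apply(X_real)  # shape (n_samples, n_trees), leaf index per tree
prox = np.mean(leaves[:, None, :] == leaves[None, :, :], axis=2)
print(prox.shape, prox[0, 0])
```

The resulting proximity matrix (symmetric, ones on the diagonal) can then be embedded, e.g. by multidimensional scaling, to yield the "random forest features" discussed above.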
4

Discharge-Suspended Sediment Relations: Near-channel Environment Controls Shape and Steepness, Land Use Controls Median and Low Flow Conditions

Vaughan, Angus A. 01 May 2016 (has links)
We analyzed recent total suspended solids (TSS) data from 45 gages on 36 rivers throughout the state of Minnesota. Watersheds range from 32 to 14,600 km² and represent a variety of distinct settings in terms of topography, land cover, and geologic history. Our study rivers exhibited three distinct patterns in the relationship between discharge and TSS: simple power functions, threshold power functions, and peaked or negative power functions. Differentiating rising- and falling-limb samples, we generated sediment rating curves (SRCs) of the form TSS = aQ^b, where Q is normalized discharge. The rating parameters a and b describe the vertical offset and steepness of the relationship. We also used the fitted SRCs to estimate TSS values at low flows and to quantify event-scale hysteresis. In addition to quantifying watershed-average topographic, climatic/hydrologic, geologic, soil, and land cover conditions, we used high-resolution lidar topography data to characterize the near-channel environment upstream of the gages. We used Random Forest statistical models to analyze the relationship between basin and channel features and the rating parameters. The models enabled us to identify the morphometric variables that provided the greatest explanatory power and to examine the direction, form, and strength of the partial dependence of the response variables on individual predictors. The models explained between 43% and 60% of the variance in the rating curve parameters and indicated that the steepness (exponent) of the Q-TSS relation was most related to near-channel morphological characteristics, including near-channel local relief, channel gradient, and the proportion of lakes along the channel network. Land use within the watershed explained most of the variation in the vertical offset (coefficient) of the SRCs and in TSS concentrations at low flows.
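Fitting the rating curve TSS = aQ^b reduces to a straight-line fit in log-log space, since log(TSS) = log(a) + b·log(Q). A minimal sketch with invented discharge/TSS data (true parameters chosen arbitrarily, not taken from the study):

```python
# Sketch: fit a power-law sediment rating curve TSS = a * Q**b by least
# squares in log-log space. Synthetic data with arbitrary "true" parameters.
import numpy as np

rng = np.random.default_rng(3)
Q = rng.uniform(0.1, 10.0, 300)  # normalized discharge
a_true, b_true = 12.0, 1.6
# Multiplicative lognormal noise, the usual assumption for rating curves.
TSS = a_true * Q ** b_true * np.exp(0.1 * rng.normal(size=300))

# log(TSS) = log(a) + b * log(Q): the slope is b, the intercept is log(a).
b_hat, log_a_hat = np.polyfit(np.log(Q), np.log(TSS), 1)
a_hat = np.exp(log_a_hat)
print(f"a ~ {a_hat:.1f} (vertical offset), b ~ {b_hat:.2f} (steepness)")
```

The fitted slope b is the "steepness" and exp(intercept) the "vertical offset" that the Random Forest models above relate to near-channel morphology and land use, respectively.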
5

Interpretation, identification and reuse of models: theory and algorithms with applications in predictive toxicology

Palczewska, Anna Maria January 2014 (has links)
This thesis is concerned with developing methodologies that enable existing models to be effectively reused. The results are presented in the framework of Quantitative Structure-Activity Relationship (QSAR) models, but their application is much more general. QSAR models relate chemical structures to their biological, chemical or environmental activity. Many applications offer an environment to build and store predictive models, but they do not provide advanced functionality for efficient model selection or for interpreting model predictions on new data. This thesis addresses these issues and proposes methodologies for three research problems: model governance (management), model identification (selection), and interpretation of model predictions. Combined, these methodologies can be used to build more efficient systems for model reuse in QSAR modelling and other areas. The first part of this study investigates toxicity data and model formats and reviews some existing toxicity systems in the context of model development and reuse. Based on the findings of this review and the principles of data governance, a novel concept of model governance is defined. Model governance comprises model representation and model governance processes, which are designed and presented in the context of model management. As an application, minimum information requirements and an XML representation for QSAR models are proposed. Once a collection of validated, accepted and well-annotated models is available within a model governance framework, the models can be applied to new data. It may happen that more than one model is available for the same endpoint. Which one to choose? The second part of this thesis proposes a theoretical framework and algorithms that enable automated identification of the most reliable model for new data from a collection of existing models.
The main idea is to partition the search space into groups and assign a single model to each group. Constructing this partition is difficult because it is a bi-criteria problem; the main contribution of this part is the application of Pareto points to the search space partition. The proposed methodology is applied to three endpoints in chemoinformatics and predictive toxicology. Having identified a model for the new data, we would like to know how the model obtained its prediction and how trustworthy it is. Interpreting predictions is straightforward for linear models thanks to the availability of model parameters and their statistical significance; for nonlinear models this information can be hidden inside the model structure. This thesis proposes an approach for interpreting a random forest classification model that determines the influence (called the feature contribution) of each variable on the model's prediction for an individual data point. Three methods are proposed for analysing feature contributions. Such analysis can reveal patterns that represent the standard behaviour of the model and allow an additional assessment of the model's reliability for new data. Applying these methods to two standard benchmark datasets from the UCI machine learning repository demonstrates the great potential of this methodology. The algorithm for calculating feature contributions has been implemented and is available as an R package called rfFC.
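The bi-criteria idea behind the Pareto-point construction can be sketched in a few lines: each candidate model is scored on two criteria to be minimised (for instance, distance of the new compound to the model's training domain and the model's estimated error), and only the non-dominated candidates are kept. The scores below are invented for illustration and do not come from the thesis.

```python
# Sketch: keep only Pareto-optimal candidates under two minimised criteria.
# The five (distance, error) pairs are hypothetical model scores.
def pareto_front(points):
    """Return indices of points not dominated in both (minimised) criteria."""
    front = []
    for i, (a1, b1) in enumerate(points):
        dominated = any(
            (a2 <= a1 and b2 <= b1) and (a2 < a1 or b2 < b1)
            for j, (a2, b2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (distance_to_training_domain, estimated_error) for five hypothetical models
scores = [(0.2, 0.9), (0.5, 0.4), (0.9, 0.1), (0.6, 0.6), (0.3, 0.8)]
best = pareto_front(scores)
print(best)  # indices of models worth considering for the new data
```

Model 3 here is dominated by model 1 (worse on both criteria) and drops out; the surviving set is the Pareto front from which a single model per region of the search space can be chosen.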
7

Regression Model to Project and Mitigate Vehicular Emissions in Cochabamba, Bolivia

Wagner, Christopher 28 August 2017 (has links)
No description available.
8

Marginal agricultural land identification in the Lower Mississippi Alluvial Valley

Tiwari, Prakash 12 May 2023 (has links) (PDF)
This study identified marginal agricultural lands in the Lower Mississippi Alluvial Valley using crop-yield prediction models. Random Forest Regression (RFR) and Multiple Linear Regression (MLR) models were trained and validated using county-level crop yield data, climate data, soil properties, and the Normalized Difference Vegetation Index (NDVI). The RFR model outperformed the MLR model in estimating soybean and corn yields, with an index of agreement (d) of 0.98 and 0.96, Nash-Sutcliffe model efficiency (NSE) of 0.88 and 0.93, and root mean square error (RMSE) of 9.34% and 5.84%, respectively. Marginal agricultural lands were estimated at 26,366 hectares using 2021 costs and sale prices, but at 623,566 hectares using average costs and sale prices from 2016 to 2021. The results provide valuable information for land-use planners and farmers to update field crops and plan alternative land uses that can generate higher returns while conserving these marginal lands.
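The two goodness-of-fit statistics reported above, the index of agreement (d) and the Nash-Sutcliffe efficiency (NSE), are easy to compute directly. The sketch below uses toy yield numbers and the standard textbook definitions of both measures, which may differ in detail from the study's exact implementation.

```python
# Sketch: Nash-Sutcliffe efficiency (NSE) and Willmott's index of agreement
# (d) for paired observed/simulated values. Toy yields, invented for
# illustration only.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    # 1 minus ratio of error variance to observed variance; 1.0 is perfect.
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def index_of_agreement(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1 - np.sum((obs - sim) ** 2) / denom

obs = [3.1, 2.8, 3.6, 4.0, 2.5, 3.3]  # e.g. observed yield, t/ha (toy)
sim = [3.0, 2.9, 3.5, 3.8, 2.6, 3.4]  # model estimates (toy)

print(f"NSE = {nse(obs, sim):.3f}, d = {index_of_agreement(obs, sim):.3f}")
```

Both measures equal 1 for a perfect fit; values near 1, as reported for the RFR model, indicate close agreement between predicted and observed yields.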
9

Automated Learning and Decision Making of a Smart Home System

Karlsson, Daniel, Lindström, Alex January 2018 (has links)
Smart homes are custom-fitted systems that let users manage their home environments. A smart home consists of devices that can communicate with each other; a central control unit uses this communication to manage the environment and the devices in it. Setting up a smart home today involves extensive manual customization to make it behave as the user wishes. What smart homes lack is the ability to learn from the user's behaviour and habits in order to provide a customized environment autonomously. The purpose of this thesis is to examine whether environmental data can be collected and used in a small smart home system to learn about the user's behaviour. To collect data and attempt this learning process, a system was set up. It uses a central control unit to mediate between wireless electrical outlets and sensors that track motion, light, temperature and humidity. The device and sensor readings, along with user interactions in the environment, make up the collected data. By studying the collected data, the system is able to create rules, which it uses to make decisions within its environment to suit the user's needs. The performance of the system varies depending on how the data collection is handled. The results show that it is important to collect data both at regular intervals and whenever the user performs an action.
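One possible way to realise the rule-creation step described above is to fit a small decision tree to logged sensor data and read off its if-then rules. Everything in this sketch — the sensor names, the threshold, and the "lamp on when moving in a dark room" habit — is invented for illustration and is not the thesis's system.

```python
# Sketch: derive human-readable control rules from logged sensor data with a
# shallow decision tree. Sensor values and the lamp-usage habit are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
n = 500
motion = rng.integers(0, 2, n)   # motion detected? (0/1)
light = rng.uniform(0, 500, n)   # ambient light, lux
# Invented habit: the user switches the lamp on when moving in a dark room.
lamp_on = ((motion == 1) & (light < 150)).astype(int)

X = np.column_stack([motion, light])
tree = DecisionTreeClassifier(max_depth=3, random_state=4).fit(X, lamp_on)

# The tree itself is the learned rule set the control unit could act on.
rules = export_text(tree, feature_names=["motion", "lux"])
print(rules)
```

The exported text reads as nested if-then rules (e.g. a split on motion, then on lux), which is exactly the form a central control unit can evaluate against live sensor readings.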
