231

Reliable Information Exchange in IIoT : Investigation into the Role of Data and Data-Driven Modelling

Lavassani, Mehrzad January 2018
The concept of the Industrial Internet of Things (IIoT) is the tangible building block for the realisation of the fourth industrial revolution. It should improve the productivity, efficiency and reliability of industrial automation systems, leading to revenue growth in industrial scenarios. IIoT needs to encompass various disciplines and technologies to constitute an operable and harmonious system. One essential requirement for a system to exhibit such behaviour is reliable exchange of information. In industrial automation, the information life-cycle starts at the field level, with data collected by sensors, and ends at the enterprise level, where that data is processed into knowledge for business decision-making. In IIoT, the process of knowledge discovery is expected to start in the lower layers of the automation hierarchy and to cover the data exchange between connected smart objects performing collaborative tasks. This thesis aims to aid the comprehension of the processes of information exchange in IIoT-enabled industrial automation: in particular, how reliable exchange of information can be performed by communication systems at the field level given an underlying wireless sensor technology, and how data analytics can complement the processes at various levels of the automation hierarchy. Furthermore, this work explores how an IIoT monitoring system can be designed and developed. Communication reliability is addressed by proposing a redundancy-based medium access control protocol for mission-critical applications and analysing its performance with respect to real-time and deterministic delivery. The importance of data and the benefits of data analytics at various levels of the automation hierarchy are examined through proposed data-driven methods for visualisation, centralised system modelling and distributed data-stream modelling. The design and development of an IIoT monitoring system are addressed by proposing a novel three-layer framework that incorporates wireless sensor, fog, and cloud technologies, and an IIoT testbed is developed to realise the proposed framework. The outcome of this study suggests that redundancy-based mechanisms improve communication reliability, but can also introduce drawbacks in the IIoT context, such as poor link utilisation and limited scalability. Data-driven methods yield more readable visualisations and reduce the need for ground truth in system modelling. The results illustrate that distributed modelling can lessen the negative effect of redundancy-based mechanisms on link utilisation by reducing up-link traffic. Mathematical analysis reveals that introducing a fog layer into the IIoT framework removes the single point of failure and enhances scalability while meeting the latency requirements of the monitoring application. Finally, the experimental results show that the IIoT testbed works adequately and can support the future development and deployment of IIoT applications. / SMART (Smarta system och tjänster för ett effektivt och innovativt samhälle)
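The trade-off the abstract describes, redundancy improving delivery reliability while consuming extra link capacity, can be illustrated with a few lines of code. The sketch below is a minimal simulation assuming an idealised link that drops packets independently with a fixed probability; the loss rate and redundancy degrees are illustrative assumptions, not parameters from the thesis.

```python
# Minimal sketch: reliability vs. link utilisation for redundancy-based
# transmission over an idealised wireless link with independent losses.
# The loss probability and redundancy degrees are illustrative, not values
# taken from the thesis.
import random

def delivery_probability(loss_p: float, copies: int, trials: int = 100_000) -> float:
    """Estimate the probability that at least one of `copies` redundant
    transmissions survives a link that drops packets with probability loss_p."""
    delivered = sum(
        any(random.random() > loss_p for _ in range(copies))
        for _ in range(trials)
    )
    return delivered / trials

for copies in (1, 2, 3):
    p = delivery_probability(loss_p=0.2, copies=copies)
    utilisation_cost = copies  # each extra copy consumes another slot on the link
    print(f"{copies} copies: delivery ~{p:.3f}, link slots used: {utilisation_cost}")
```

Analytically the delivery probability is 1 - loss_p^copies, so reliability rises quickly with redundancy while the slots consumed grow linearly, which is the utilisation penalty the thesis attributes to redundancy-based mechanisms.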
232

Properties of binary oxides: a DFT study

Miroshnichenko, O. (Olga) 14 June 2019
Abstract: Titanium dioxide nanoparticles are used in an enormous number of applications. Their properties differ from those of bulk TiO₂ and are affected by adsorbates that are unavoidably present on the surface. In this thesis, the effect of OH and SO₄ groups (adsorbates present on the surface during manufacturing) on the properties of anatase-structured TiO₂ nanoparticles is studied. It was found that these groups change both the geometric and electronic structure of the nanoparticles, resulting in changes in the photoabsorption spectrum. Bader charges are calculated from the electron density obtained in Density Functional Theory calculations and can be used to determine the oxidation state of an atom. In this work, the relation between computed partial charges and oxidation states for binary oxides is demonstrated by fitting a linear regression to data from an open materials database. The applicability of oxidation-state determination by Bader charges to mixed-valence compounds and surfaces is also considered. / Original papers (not included in the electronic version of the dissertation):
Miroshnichenko O., Auvinen S., & Alatalo M. (2015). A DFT study of the effect of OH groups on the optical, electronic, and structural properties of TiO₂ nanoparticles. Phys. Chem. Chem. Phys., 17, 5321–5327. https://doi.org/10.1039/c4cp02789b
Miroshnichenko O., Posysaev S., & Alatalo M. (2016). A DFT study of the effect of SO₄ groups on the properties of TiO₂ nanoparticles. Phys. Chem. Chem. Phys., 18, 33068–33076. https://doi.org/10.1039/c6cp05681d http://jultika.oulu.fi/Record/nbnfi-fe201707037608
Posysaev S., Miroshnichenko O., Alatalo M., Le D., & Rahman T.S. (2019). Oxidation states of binary oxides from data analytics of the electronic structure. Comput. Mater. Sci., 161, 403–414. https://doi.org/10.1016/j.commatsci.2019.01.046
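The charge-to-oxidation-state mapping lends itself to a small worked example. The sketch below fits a linear regression from Bader partial charges to formal oxidation states and rounds the prediction to the nearest integer state; the numeric values are invented for illustration and are not data from the thesis or the referenced database.

```python
# Minimal sketch of the charge-to-oxidation-state idea: fit a linear model
# mapping computed Bader partial charges to known oxidation states, then use
# it to assign states to new atoms. The sample values are invented for
# illustration only.
import numpy as np

# (Bader charge, formal oxidation state) pairs for cations in binary oxides
charges = np.array([1.65, 2.30, 1.20, 0.85, 2.05])
states = np.array([3, 4, 2, 1, 4])

slope, intercept = np.polyfit(charges, states, 1)  # least-squares line

def predicted_state(bader_charge: float) -> int:
    """Round the regression output to the nearest integer oxidation state."""
    return round(slope * bader_charge + intercept)

print(predicted_state(1.7))  # a charge near 1.7 maps to +3 with this toy fit
```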
233

Feedback-Driven Data Clustering

Hahmann, Martin 28 February 2014
The acquisition and analysis of data have become common yet critical tasks in many areas of modern economy and research. Unfortunately, the ever-increasing scale of datasets has long outgrown the capacities humans can muster to extract information from them and gain new knowledge. For this reason, research areas like data mining and knowledge discovery steadily gain importance. The algorithms they provide for the extraction of knowledge are mandatory prerequisites that enable people to analyze large amounts of information. Among the approaches offered by these areas, clustering is one of the most fundamental. By finding groups of similar objects inside the data, it aims to identify meaningful structures that constitute new knowledge. Clustering results are also often used as input for other analysis techniques like classification or forecasting.

As clustering extracts new and unknown knowledge, it has no access to any form of ground truth. Clustering results therefore have a hypothetical character and must be interpreted with respect to the application domain. This makes clustering very challenging and leads to an extensive and diverse landscape of available algorithms, most of which are expert tools tailored to a single narrowly defined application scenario. Over the years, this specialization has become a major trend, meant to counter the inherent uncertainty of clustering by building as many domain specifics as possible into algorithms. While customized methods often improve result quality, they become ever more complicated to handle and lose versatility. This creates a dilemma, especially for amateur users, whose numbers are increasing as clustering is applied in more and more domains: while an abundance of tools is offered, guidance is severely lacking, and users are left alone with critical tasks like algorithm selection, parameter configuration, and the interpretation and adjustment of results.

This thesis aims to solve this dilemma by structuring and integrating the necessary steps of clustering into a guided and feedback-driven process, providing users with a default modus operandi for the application of clustering. Two main components constitute the core of the process: algorithm management and the visual-interactive interface. Algorithm management handles all aspects of actual clustering creation and the methods involved. It employs a modular approach to algorithm description that lets users understand, design, and compare clustering techniques with the help of building blocks. In addition, it offers facilities for integrating multiple clusterings of the same dataset into an improved solution: new approaches based on ensemble clustering not only allow the use of different clustering techniques, but also ease their application by acting as an abstraction layer that unifies individual parameters. Finally, this component provides a multi-level interface that structures all available control options and provides the docking points for user interaction.

The visual-interactive interface supports users during result interpretation and adjustment. For this, the defining characteristics of a clustering are communicated via a hybrid visualization. In contrast to traditional data-driven visualizations, which tend to become overloaded and unusable as the volume and dimensionality of the data grow, this novel approach communicates the abstract aspects of cluster composition and the relations between clusters. This aspect orientation allows the use of easy-to-understand visual components and makes the visualization immune to scale-related effects of the underlying data. The visual communication is attuned to a compact and universally valid set of high-level feedback operations for modifying clustering results: instead of technical parameters that indirectly change the whole clustering by influencing its creation process, users can apply simple commands like merge or split to adjust clusters directly.

The orchestrated cooperation of these two components creates a modus operandi in which clusterings are no longer created and discarded as a whole until a satisfying result is obtained; instead, users apply the feedback-driven process to iteratively refine an initial solution. Performance and usability of the proposed approach were evaluated in a user study. Its results show that the feedback-driven process enabled amateur users to easily create satisfying clustering results even from varied and suboptimal starting situations.
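The merge and split commands can be pictured as simple operations on a vector of cluster labels. The sketch below is a minimal illustration of that idea, assuming a plain list-of-labels representation; the function names and the tiny example are hypothetical, not the interface described in the thesis.

```python
# Minimal sketch of the high-level feedback idea: instead of re-tuning
# algorithm parameters, the user directly merges or splits clusters in an
# existing labelling. Representation and names are illustrative only.
from collections import Counter

def merge(labels: list[int], a: int, b: int) -> list[int]:
    """Merge cluster b into cluster a by relabelling b's members."""
    return [a if lbl == b else lbl for lbl in labels]

def split(labels: list[int], cluster: int, assign) -> list[int]:
    """Split `cluster` using a caller-supplied rule assign(index) -> new label."""
    return [assign(i) if lbl == cluster else lbl for i, lbl in enumerate(labels)]

labels = [0, 0, 1, 1, 2, 2]
labels = merge(labels, a=1, b=2)  # user decides clusters 1 and 2 belong together
print(Counter(labels))            # Counter({1: 4, 0: 2})
```

The point of the abstraction is that the user states the desired outcome (these two groups are one) rather than guessing which parameter change would produce it.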
234

Desafios e oportunidades para a Fundação Seade: sua transformação e adaptação ao complexo e dinâmico ambiente das estatísticas oficiais / Challenges and opportunities for Fundação Seade: its transformation and adaptation to the complex and dynamic environment of official statistics

Leonardo, Fabrizio Clares, Calais, Gilson de Oliveira Silva, Coppede Junior, Wagner 01 December 2017
Prepared over the course of 2017, this study presents an organizational analysis of the Fundação Sistema Estadual de Análise de Dados (SEADE), a public-law entity linked to the State Department of Planning and Management and responsible for producing and disseminating socioeconomic and demographic statistics and analyses for the State of São Paulo. Given the current context of change in the sector, driven by the impact of new technologies and, in particular, the effects of Big Data Analytics, process-focused management and a new organizational structure, built on best practices and internationally recognized reference process models, are essential to secure the investments needed to maintain SEADE's capacity to produce and disseminate high-quality statistical information suited to users' needs and expectations. To reconcile these objectives, a set of strategic actions is recommended that, besides improving delivery on the institution's priorities, should also increase user access through its main channel of communication with the market. These findings and recommendations are based on a review of the business model embodied in the configuration of SEADE's operational and support processes, its organizational arrangement, and the way it communicates with users, and they respond to growing demands for greater efficiency and transparency in the management of public resources.
235

Interprétation sémantique d'images hyperspectrales basée sur la réduction adaptative de dimensionnalité / Semantic interpretation of hyperspectral images based on the adaptative reduction of dimensionality

Sellami, Akrem 11 December 2017
Hyperspectral imagery acquires rich spectral information about a scene in several hundred or even thousands of narrow, contiguous spectral bands. However, the high number of spectral bands, the strong inter-band correlation, and the redundancy of spectro-spatial information make the interpretation of these massive hyperspectral data one of the major challenges for the remote sensing community. In this context, the key problem is to reduce the number of unnecessary spectral bands, that is, to reduce redundancy and strong inter-band correlation while preserving the relevant information. Projection approaches transform the hyperspectral data into a reduced subspace by combining all original spectral bands, while band selection approaches seek a subset of relevant spectral bands. In this thesis, we first address hyperspectral image classification, integrating spectro-spatial information into dimensionality reduction in order to improve classification performance and to overcome the loss of spatial information inherent in projection approaches. We propose a hybrid model that preserves spectro-spatial information by exploiting a tensor model in the locality preserving projection approach (TLPP) and uses constraint band selection (CBS) as an unsupervised approach to select discriminant spectral bands. To model the uncertainty and imperfection of these reduction approaches and classifiers, we propose an evidential approach based on Dempster-Shafer Theory (DST). In a second step, we extend the hybrid model by exploiting semantic knowledge, extracted from the features obtained by the proposed TLPP approach, to enrich the CBS technique. The proposed approach selects relevant spectral bands that are simultaneously informative, discriminant, distinctive, and minimally redundant: it applies the CBS technique, injecting rules obtained with knowledge extraction techniques, to select the optimal subset of relevant spectral bands automatically and adaptively. The performance of our approach is evaluated on several real hyperspectral datasets.
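The intuition behind discarding redundant bands can be sketched in a few lines. The code below greedily keeps bands that are weakly correlated with those already selected; this is a generic stand-in for illustration, not the constraint-based CBS method proposed in the thesis, and all parameter values are assumptions.

```python
# Minimal sketch of correlation-based band selection: keep a band only if it
# is not too correlated with the bands already retained. A generic stand-in,
# not the thesis's CBS technique.
import numpy as np

def select_bands(cube: np.ndarray, k: int, max_corr: float = 0.95) -> list[int]:
    """cube: (pixels, bands) matrix; returns indices of up to k retained bands."""
    corr = np.abs(np.corrcoef(cube, rowvar=False))  # band-to-band correlation
    selected = [0]
    for band in range(1, cube.shape[1]):
        if len(selected) == k:
            break
        if all(corr[band, s] < max_corr for s in selected):
            selected.append(band)
    return selected

rng = np.random.default_rng(0)
cube = rng.normal(size=(500, 40))                       # toy "image": 500 pixels, 40 bands
cube[:, 1] = cube[:, 0] + 0.01 * rng.normal(size=500)   # band 1 nearly duplicates band 0
print(select_bands(cube, k=5))                          # band 1 is skipped as redundant
```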
236

Um framework de testes unitários para procedimentos de carga em ambientes de business intelligence / A unit testing framework for loading procedures in business intelligence environments

Santos, Igor Peterson Oliveira 30 August 2016
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Business Intelligence (BI) relies on the Data Warehouse (DW), a historical data repository designed to support the decision-making process. Despite the potential benefits of a DW, data quality issues prevent users from realizing the benefits of a BI and Data Analytics environment. Problems related to data quality can arise at any stage of the ETL (Extract, Transform and Load) process, especially in the loading phase. This thesis presents an approach to automate the selection and execution of previously identified test cases for loading procedures in DW-based BI and Data Analytics environments. To verify and validate the approach, a unit testing framework was developed. The overall goal is to improve data quality; the specific aim is to reduce test effort and, consequently, to promote testing activities in the DW process. The experimental evaluation was performed through two controlled experiments in industry: the first investigated the adequacy of the proposed method for developing DW procedures, and the second compared the proposed method against a generic framework for DW procedure development. Both results showed that our approach clearly reduces test effort and coding errors during the testing phase in decision-support environments. / The quality of a software product is directly related to the testing employed during its development. Although testing processes for application software and transactional systems have already reached a high degree of maturity, they must be re-examined for Business Intelligence (BI) and Data Analytics environments, whose differences from other types of systems require existing testing processes and tools to be adjusted to a new reality. In this context, most effective BI applications depend on a Data Warehouse (DW), a historical data repository designed to support decision-making processes. It is the data loads into the DW that deserve special attention in testing, since they comprise procedures that are critical to quality. This work proposes a testing approach, based on a unit testing framework, for loading procedures in a BI and Data Analytics environment. Using metadata about the loading routines, the proposed framework automatically executes test cases, generating initial states and analysing final states, and selects the test cases to be applied. The goal is to improve the quality of data-loading procedures and to reduce the time spent on testing. The experimental evaluation was carried out through two controlled experiments in industry: the first evaluated the use of test cases for loading routines, comparing the framework's effectiveness with a manual approach; the second compared it with a similar generic framework from the market. The results indicated that the framework can contribute to increased productivity and fewer coding errors during the testing phase in decision-support environments.
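The core testing idea, seeding a known initial state, running the load routine, and asserting on the final state, can be sketched as an ordinary unit test. The tables and the trivial load function below are illustrative assumptions, not the framework described in the thesis.

```python
# Minimal sketch of unit-testing a loading procedure: set up an initial
# warehouse state, run the load, assert on the final state. Schema and load
# logic are illustrative only.
import sqlite3
import unittest

def load_customers(conn: sqlite3.Connection) -> None:
    """Toy ETL step: copy staged rows into the warehouse table, de-duplicated."""
    conn.execute(
        "INSERT INTO dw_customer(id, name) "
        "SELECT DISTINCT id, name FROM stg_customer "
        "WHERE id NOT IN (SELECT id FROM dw_customer)"
    )

class LoadCustomersTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE stg_customer(id INTEGER, name TEXT)")
        self.conn.execute("CREATE TABLE dw_customer(id INTEGER, name TEXT)")
        # initial state: one row already loaded, one new and one duplicate staged
        self.conn.executemany(
            "INSERT INTO stg_customer VALUES (?, ?)",
            [(1, "Ana"), (2, "Bo"), (1, "Ana")],
        )
        self.conn.execute("INSERT INTO dw_customer VALUES (1, 'Ana')")

    def test_load_is_idempotent_and_deduplicates(self):
        load_customers(self.conn)
        rows = self.conn.execute(
            "SELECT id, name FROM dw_customer ORDER BY id"
        ).fetchall()
        self.assertEqual(rows, [(1, "Ana"), (2, "Bo")])

if __name__ == "__main__":
    unittest.main()
```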
238

Big Data Analytics for Agriculture Input Supply Chain in Ethiopia : Supply Chain Management Professionals Perspective

Hassen, Abdurahman, Chen, Bowen January 2020
In Ethiopia, agriculture accounts for 85% of total employment, and the country's exports rely entirely on agricultural commodities. The country is continuously affected by chronic food shortages. In the last 40 years, its population has almost tripled, and greater agricultural productivity is required to support the livelihoods of millions of citizens. As various studies report, Ethiopia needs to address a number of policy and strategic priorities to improve agriculture; an inefficient supply chain for agricultural inputs is identified as one of the significant obstacles to developing agricultural productivity in the country. The research problem of this thesis is to understand the potential of Big Data Analytics (BDA) for achieving a better agriculture input supply chain in Ethiopia. Based on this, we conducted a basic qualitative study to understand the expectations of Supply Chain Management (SCM) professionals, the requirements for the potential applications of Big Data Analytics, and the implications of applying it, from the perspectives of SCM professionals in Ethiopia. The findings suggest that BDA may bring operational and strategic benefits to the agriculture input supply chain in Ethiopia, and that its application may have positive implications for agricultural productivity and food security in the country. The findings of this study are not generalizable beyond the participants interviewed.
239

Sistema para el control y monitoreo de alteraciones hipertensivas en el embarazo / Wearable technology model to control and monitor hypertension during pregnancy

Balbin Lopez, Betsy Diamar, Reyes Coronado, Diego Antonio 31 January 2019
In Peru, according to studies conducted in 2010, 42% of hypertensive patients are treated, but only 14% achieve control of their condition. The current hypertension-control process is not fully effective: patients do not adhere completely to treatment, and blood pressure checks are sporadic measurements separated by long intervals, leaving no reliable information about the patient's progress. A system is proposed for the control and monitoring of hypertensive disorders in pregnancy using non-invasive biomedical sensors. Continuous measurement provides accurate and reliable information so that pregnant women can detect a hypertensive disorder in time. In addition, the system alerts family members and the attending physician about blood pressure levels in case of emergency. The project's contribution is to curb the growing prevalence of chronic diseases by integrating health services with technology, and to manage information from data collection on the wearable through to its presentation. In tests with pregnant patients, 38.64% were controlled and monitored 75% of the time. These results indicate that technology can positively influence the reduction of hypertension in general and of similar chronic diseases.
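The alerting behaviour described above can be sketched as a simple threshold check over periodic readings. The cut-off values and the notify stub below are illustrative placeholders, not the system's actual parameters or clinical guidance.

```python
# Minimal sketch of the alerting logic: classify a blood-pressure reading and
# notify contacts when it crosses a hypertensive threshold. Thresholds and the
# notify() stub are illustrative placeholders only.
def classify(systolic: int, diastolic: int) -> str:
    if systolic >= 160 or diastolic >= 110:
        return "severe"
    if systolic >= 140 or diastolic >= 90:
        return "elevated"
    return "normal"

def notify(contacts: list[str], message: str) -> None:
    for contact in contacts:
        print(f"alert -> {contact}: {message}")  # stand-in for SMS/push delivery

reading = (152, 96)  # periodic sample from the wearable sensor
level = classify(*reading)
if level != "normal":
    notify(["physician", "family"], f"BP {reading[0]}/{reading[1]} classified {level}")
```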
