411

Therapy Decision Support Based on Recommender System Methods

Gräßer, Felix, Beckert, Stefanie, Küster, Denise, Schmitt, Jochen, Abraham, Susanne, Malberg, Hagen, Zaunseder, Sebastian 21 July 2017 (has links) (PDF)
We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely a Collaborative Recommender and a Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data, and recommend the therapy assumed to provide the best outcome for a specific patient and time, that is, consultation. The proposed methods are evaluated using a clinical database of patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender proves to generate both better outcome predictions and higher recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from being combined into an overall recommender system.
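The neighbourhood idea behind such a Collaborative Recommender can be sketched in a few lines: predict a patient's outcome under a therapy as a similarity-weighted average over the most similar patients who received it. The matrix and all numbers below are a hypothetical illustration, not the authors' data or their exact algorithm.

```python
import numpy as np

def predict_outcome(outcomes, patient, therapy, k=2):
    """Predict a patient's outcome for a therapy as a similarity-weighted
    average over the k most similar patients who tried that therapy.
    NaN marks therapies a patient has not received (data sparsity)."""
    others = [p for p in range(outcomes.shape[0])
              if p != patient and not np.isnan(outcomes[p, therapy])]
    if not others:
        return None  # sparsity: no neighbour has tried this therapy

    def similarity(a, b):
        # cosine similarity over therapies both patients received
        mask = ~np.isnan(a) & ~np.isnan(b)
        if mask.sum() == 0:
            return 0.0
        return float(np.dot(a[mask], b[mask]) /
                     (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-12))

    sims = sorted(((similarity(outcomes[patient], outcomes[p]), p) for p in others),
                  reverse=True)[:k]
    num = sum(s * outcomes[p, therapy] for s, p in sims)
    den = sum(abs(s) for s, _ in sims) + 1e-12
    return num / den

# hypothetical outcome matrix: rows = patients, columns = therapy options
R = np.array([[0.8, 0.2, np.nan],
              [0.7, np.nan, 0.9],
              [0.9, 0.3, 0.8]])
pred = predict_outcome(R, patient=0, therapy=2)
```

A demographic-based variant would compute the similarity from patient attributes (age, sex, comorbidities) instead of past outcomes, which is why it can cover consultations the collaborative method cannot.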
412

Smart Meters Big Data : Behavioral Analytics via Incremental Data Mining and Visualization

Singh, Shailendra January 2016 (has links)
The big data framework applied to smart meters offers an exceptional platform for data-driven forecasting and decision making to achieve sustainable energy efficiency. Winning consumer confidence by respecting occupants' energy consumption behavior and preferences, so as to improve participation in various energy programs, is imperative but difficult to achieve. The key elements for understanding and predicting household energy consumption are the activities occupants perform, the appliances and the times that appliances are used, and inter-appliance dependencies. This information can be extracted from the context-rich big data from smart meters, although this is challenging because: (1) it is not trivial to mine complex interdependencies between appliances from multiple concurrent data streams; (2) it is difficult to derive accurate relationships between interval-based events, where multiple appliance usages persist; (3) continuous generation of the energy consumption data can trigger changes over time in appliance-time and appliance-appliance associations. To overcome these challenges, we propose an unsupervised progressive incremental data mining technique using frequent pattern mining (appliance-appliance associations) and cluster analysis (appliance-time associations) coupled with a Bayesian network based prediction model. The proposed technique addresses the need to analyze temporal energy consumption patterns at the appliance level, which directly reflect consumers' behaviors and provide a basis for generalizing household energy models. Extensive experiments were performed on the model with real-world datasets and strong associations were discovered. The accuracy of the proposed model for predicting multiple appliances' usage outperformed a support vector machine at every stage, attaining accuracies of 81.65%, 85.90%, and 89.58% for 25%, 50% and 75% of the training dataset size, respectively.
Moreover, accuracies of 81.89%, 75.88%, 79.23%, 74.74%, and 72.81% were obtained for short-term (hours) and long-term (day, week, month, and season) energy consumption forecasts, respectively.
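The appliance-appliance association step of such a pipeline can be illustrated with a minimal frequent-pair miner over time slots: count how often two appliances run in the same interval and keep pairs above a support threshold. Appliance names and the threshold below are hypothetical, and this is only a sketch of frequent pattern mining, not the thesis's incremental algorithm.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(intervals, min_support):
    """Return appliance pairs whose co-occurrence support (fraction of
    intervals in which both are active) meets the threshold."""
    counts = Counter()
    for active in intervals:
        for pair in combinations(sorted(set(active)), 2):
            counts[pair] += 1
    n = len(intervals)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

# hypothetical smart-meter intervals: appliances active in each time slot
slots = [
    {"tv", "lamp"}, {"tv", "lamp", "kettle"}, {"washer"},
    {"tv", "lamp"}, {"kettle", "toaster"}, {"tv", "lamp"},
]
assoc = frequent_pairs(slots, min_support=0.5)
```

An incremental variant would update the `Counter` as new intervals stream in, re-evaluating supports periodically, which is what lets associations drift over time as the abstract describes.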
413

Remaining useful life estimation of critical components based on Bayesian Approaches. / Prédiction de l'état de santé des composants critiques à l'aide de l'approche Bayesienne

Mosallam, Ahmed 18 December 2014 (has links)
La construction de modèles de pronostic nécessite la compréhension du processus de dégradation des composants critiques surveillés afin d’estimer correctement leurs durées de fonctionnement avant défaillance. Un processus de dégradation peut être modélisé en utilisant des modèles de connaissance issus des lois de la physique. Cependant, cette approche nécessite des compétences pluridisciplinaires et des moyens expérimentaux importants pour la validation des modèles générés, ce qui n’est pas toujours facile à mettre en place en pratique. Une des alternatives consiste à apprendre le modèle de dégradation à partir de données issues de capteurs installés sur le système. On parle alors d’approche guidée par des données. Dans cette thèse, nous proposons une approche de pronostic guidée par des données. Elle vise à estimer à tout instant l’état de santé du composant physique et prédire sa durée de fonctionnement avant défaillance. Cette approche repose sur deux phases, une phase hors ligne et une phase en ligne. Dans la phase hors ligne, on cherche à sélectionner, parmi l’ensemble des signaux fournis par les capteurs, ceux qui contiennent le plus d’information sur la dégradation. Cela est réalisé en utilisant un algorithme de sélection non supervisé développé dans la thèse. Ensuite, les signaux sélectionnés sont utilisés pour construire différents indicateurs de santé représentant les différents historiques de données (un historique par composant). Dans la phase en ligne, l’approche développée permet d’estimer l’état de santé du composant test en faisant appel au filtre bayésien discret. Elle permet également de calculer la durée de fonctionnement avant défaillance du composant en utilisant le classifieur k-plus proches voisins (k-NN) et le processus de Gauss pour la régression. La durée de fonctionnement avant défaillance est alors obtenue en comparant l’indicateur de santé courant aux indicateurs de santé appris hors ligne.
L’approche développée a été vérifiée sur des données expérimentales issues de la plateforme PRONOSTIA sur les roulements ainsi que sur des données fournies par le Prognostic Center of Excellence de la NASA sur les batteries et les turboréacteurs. / Constructing prognostics models relies upon understanding the degradation process of the monitored critical components to correctly estimate the remaining useful life (RUL). Traditionally, a degradation process is represented in the form of physical or expert models. Such models require extensive experimentation and verification that are not always feasible in practice. Another approach, known as data-driven, builds up knowledge about the system degradation over time from component sensor data. Data-driven models require that sufficient historical data have been collected. In this work, a two-phase data-driven method for RUL prediction is presented. In the offline phase, the proposed method builds on finding variables that contain information about the degradation behavior using an unsupervised variable selection method. Different health indicators (HI) are constructed from the selected variables, which represent the degradation as a function of time, and saved in the offline database as reference models. In the online phase, the method estimates the degradation state using a discrete Bayesian filter. The method finally finds the offline health indicator most similar to the online one, using a k-nearest neighbors (k-NN) classifier and Gaussian process regression (GPR), and uses it as a RUL estimator. The method is verified using PRONOSTIA bearing data as well as battery and turbofan engine degradation data acquired from the NASA data repository. The results show the effectiveness of the method in predicting the RUL.
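The online matching step (comparing a partial health indicator against offline run-to-failure references) can be sketched as a simple k-NN over trajectories; the Gaussian process regression refinement is omitted, and all degradation histories below are synthetic, not PRONOSTIA or NASA data.

```python
import numpy as np

def estimate_rul(online_hi, offline_his, offline_lifetimes, k=2):
    """Match the partial online health indicator against offline reference
    HIs (one per run-to-failure history) and estimate RUL as the mean
    remaining life of the k nearest references."""
    t = len(online_hi)
    dists = []
    for hi, life in zip(offline_his, offline_lifetimes):
        # Euclidean distance over the overlapping prefix of the trajectories
        d = float(np.linalg.norm(np.asarray(hi[:t]) - np.asarray(online_hi)))
        dists.append((d, life - t))
    dists.sort()
    return float(np.mean([remaining for _, remaining in dists[:k]]))

# synthetic degradation histories: health indicator decays linearly to failure
offline = [np.linspace(1.0, 0.0, n).tolist() for n in (10, 12, 20)]
lifetimes = [10, 12, 20]
online = np.linspace(1.0, 0.55, 5)  # test component observed for 5 cycles
rul = estimate_rul(online, offline, lifetimes, k=2)
```

In the thesis's setting a GPR model would additionally extrapolate the matched indicator to the failure threshold rather than simply averaging neighbours' remaining lives.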
414

Sensibilisation allergénique au cours des huit premières années de vie, facteurs et morbidité associés dans la cohorte de naissances PARIS / Allergic sensitization over the first eight years of life, associated factors and morbidity in PARIS birth cohort

Gabet, Stephan 02 October 2017 (has links)
Contexte. Les premières années de vie apparaissent particulièrement propices au développement de la sensibilisation allergénique. Objectifs. Cette thèse vise à : i) décrire les profils de sensibilisation allergénique chez le nourrisson et l’enfant, ii) étudier l’association entre ces profils et la morbidité allergique et iii) identifier les facteurs de risque de cette sensibilisation. Méthodes. Dans le cadre du suivi de la cohorte prospective de naissances en population générale Pollution and Asthma Risk: an Infant Study (PARIS), la sensibilisation allergénique a été évaluée chez 1 860 nourrissons à 18 mois et 1 007 enfants à 8/9 ans par dosage des IgE spécifiques dirigées contre 16 et 19 allergènes, respectivement. Les informations concernant la santé et le cadre de vie des enfants ont été recueillies par questionnaires standardisés répétés. Des profils de sensibilisation et des profils de morbidité ont été identifiés par classification non supervisée et mis en relation par régression logistique multinomiale. Enfin, les facteurs associés à la sensibilisation allergénique chez le nourrisson ont été étudiés par régression logistique multivariée. Résultats. Dès 18 mois, 13,8% des enfants étaient sensibilisés et 6,2%, multi-sensibilisés. À 8/9 ans, ces prévalences étaient de 34,5% et 19,8%, respectivement. Les profils de sensibilisation identifiés chez le nourrisson (3) et dans l’enfance (5) différaient au regard de la morbidité allergique. L’analyse étiologique a permis de préciser le rôle des expositions précoces aux allergènes et aux microorganismes sur la sensibilisation allergénique. Conclusion. Cette thèse contribue à une meilleure compréhension de l’histoire naturelle de la sensibilisation allergénique, et ce, dès les premières années de vie. Cette connaissance est essentielle à la prévention des maladies allergiques qui en découlent. / Background. The first years of life appear to be critical for the development of allergic sensitization. Objectives. 
This thesis aims: i) to describe allergic sensitization profiles in infants and children, ii) to assess the link between these sensitization profiles and allergic morbidity, and iii) to identify risk factors for allergic sensitization. Methods. This work concerns children involved in the Pollution and Asthma Risk: an Infant Study (PARIS) population-based prospective birth cohort. Allergic sensitization was assessed in 1,860 18-month-old infants and 1,007 8/9-year-old children by specific IgE measurements towards 16 and 19 allergens, respectively. Lifelong health and living-condition data were collected by repeated standardized questionnaires. Sensitization profiles and morbidity profiles were identified using unsupervised classification and related to each other by multinomial logistic regression. Finally, risk factors for early allergic sensitization were assessed by multivariate logistic regression. Results. By 18 months of age, 13.8% of children were already sensitized and 6.2% multi-sensitized. At 8/9 years, the corresponding prevalences were 34.5% and 19.8%, respectively. Sensitization profiles identified in infancy (3) and in childhood (5) differed in terms of allergic morbidity. Risk factor analysis clarified the role of early exposure to allergens and microorganisms in allergic sensitization. Conclusion. This thesis improves understanding of the natural history of allergic sensitization from the first years of life onward. This knowledge is essential for preventing the allergic diseases that follow.
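The unsupervised-classification step used to derive sensitization profiles can be illustrated with a tiny k-means over hypothetical specific-IgE measurements; the cohort's actual clustering method, allergen panel, and data are not reproduced here.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: group subjects by their measurement profiles
    (an example of the unsupervised classification idea)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each subject to the nearest cluster center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# hypothetical specific-IgE levels (kU/L) towards 3 allergens for 6 children
ige = np.array([[0.1, 0.2, 0.1],   # low across the panel
                [0.2, 0.1, 0.3],
                [5.0, 4.0, 6.0],   # elevated across the panel
                [6.0, 5.0, 5.5],
                [0.1, 0.1, 0.2],
                [4.5, 5.5, 6.1]])
labels, _ = kmeans(ige, k=2)
```

The resulting cluster labels would then serve as the outcome categories in a multinomial logistic regression against morbidity profiles or exposure variables, as in the abstract.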
416

Factores de adecuación de la cultura Data-driven en las organizaciones

Bendezú Mamani, Katherine Fiorella, Ccanto Valdivia, Ronal Esteven 21 January 2021 (has links)
La cultura data-driven consiste básicamente en tomar decisiones basadas en datos, siendo utilizada hoy más que nunca por diversas organizaciones que buscan mejorar sus procesos con el fin de atender mejor a sus clientes y consumidores. Sin embargo, muchas empresas se basan en acciones sin fundamentos y acaban perdiendo grandes oportunidades al no saber aprovechar al máximo el potencial existente en los datos. Por ello, es importante que para implementar una cultura data-driven se debe tomar en cuenta la integración de los datos desde la estrategia empresarial. A su vez, la cultura organizacional es un factor clave para el desarrollo de la cultura data-driven en las organizaciones. Este trabajo busca presentar una exhaustiva investigación de artículos de alto impacto y tiene como objetivo general contrastar las diversas posturas y valoración de los autores respecto a los factores de éxito para la adopción de una cultura data-driven en las organizaciones. / A data-driven culture basically consists of making decisions based on data, and it is being used today more than ever by various organizations that seek to improve their processes in order to better serve their customers and consumers. However, many companies rely on unfounded actions and end up missing great opportunities by failing to take full advantage of the potential in their data. Therefore, to implement a data-driven culture, the integration of data into the business strategy must be considered from the outset. In turn, organizational culture is a key factor in the development of a data-driven culture in organizations. This work presents an exhaustive review of high-impact articles with the aim of contrasting the various positions and assessments of the authors regarding the success factors for the adoption of a data-driven culture in organizations. / Trabajo de Suficiencia Profesional
417

Developing Artificial Intelligence-Based Decision Support for Resilient Socio-Technical Systems

Ali Lenjani (8921381) 15 June 2020 (has links)
<div>During 2017 and 2018, two of the costliest years on record regarding natural disasters, the U.S. experienced 30 events with total losses of $400 billion. These exorbitant costs arise primarily from the lack of adequate planning, spanning the breadth from pre-event preparedness to post-event response. It is imperative to start thinking about ways to make our built environment more resilient. However, empirically calibrated and structure-specific vulnerability models, a critical input required to formulate decision-making problems, are not currently available. Here, the research objective is to improve the resilience of the built environment through an automated vision-based system that generates actionable information in the form of probabilistic pre-event prediction and post-event assessment of damage. The central hypothesis is that pre-event data, e.g., street-view images, along with the post-event image database, contain sufficient information to construct pre-event probabilistic vulnerability models for assets in the built environment. The rationale for this research stems from the fact that probabilistic damage prediction is the most critical input for formulating decision-making problems under uncertainty targeting the mitigation, preparedness, response, and recovery efforts. The following tasks are completed towards this goal.</div><div>First, planning for one of the bottleneck processes of post-event recovery is formulated as a decision-making problem considering the consequences imposed on the community (module 1). Second, a technique is developed to automate the process of extracting multiple street-view images of a given built asset, thereby creating a dataset that illustrates its pre-event state (module 2). 
Third, a system is developed that automatically characterizes the pre-event state of the built asset and quantifies the probability that it is damaged by fusing information from deep neural network (DNN) classifiers acting on pre-event and post-event images (module 3). To complete the work, a methodology is developed to associate each asset of the built environment with a structural probabilistic vulnerability model by correlating the pre-event structure characterization to the post-event damage state (module 4). The method is demonstrated and validated using field data collected from recent hurricanes within the US.</div><div>The vision of this research is to enable the automatic extraction of information about exposure and risk to enable smarter and more resilient communities around the world.</div>
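The pre/post-event information fusion of module 3 can be sketched, under a simplifying Bayesian reading, as a posterior update of a pre-event vulnerability prior by a post-event image classifier's likelihoods; all probabilities below are invented for illustration and do not come from the thesis.

```python
def fuse_damage_probability(prior, lik_damaged, lik_intact):
    """Posterior P(damaged | post-event image) via Bayes' rule, combining a
    pre-event prior (e.g. from street-view-based characterization) with the
    post-event classifier's likelihoods for the observed image."""
    num = prior * lik_damaged
    den = num + (1.0 - prior) * lik_intact
    return num / den

# hypothetical numbers: pre-event model gives this asset a 30% failure
# probability; the post-event classifier scores the observed image
posterior = fuse_damage_probability(prior=0.3,
                                    lik_damaged=0.9,  # P(image | damaged)
                                    lik_intact=0.2)   # P(image | intact)
```

Because the post-event image is far more likely under the "damaged" hypothesis, the fused probability rises well above the 30% prior, which is the qualitative behaviour such a fusion module should exhibit.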
418

Evaluation of biomedical microwave sensors : Microwave sensors as muscle quality discriminators in laboratory and pilot clinical trial settings

Mattsson, Viktor January 2022 (has links)
In this thesis the primary focus is on the evaluation of biomedical microwave sensors to be used in the muscle analyzer system. Lower muscle quality is one indicator that a patient may have sarcopenia, so the muscle analyzer system can serve as a tool for sarcopenia screening. Sarcopenia is a progressive skeletal muscle disorder that typically affects elderly people. It is characterized by several features, one of which is an infiltration of fat into the muscle. At microwave frequencies the dielectric properties of fat differ vastly from those of muscle, so this fat infiltration creates a dielectric contrast, relative to muscle without infiltration, that the sensors aim to detect. The muscle analyzer system is proposed as a portable device that can be employed in clinics to assess muscle quality. The sensors are evaluated on their ability to distinguish between normal muscle tissue and muscle of lower quality. This is achieved via electromagnetic simulations, clinical trials, where the system is compared against established techniques, and phantom experiments, where artificial tissue-emulating materials are used in a laboratory setting to mimic the properties of human tissues. In an initial clinical pilot study the split-ring resonator sensor was used, but the results raised concerns about the penetration depth of the sensor. Therefore, three new alternative sensors were designed and evaluated via simulations. Two of the new sensors showed encouraging results, one of which has been fabricated. This sensor was used in another clinical study. That study had data from only 4 patients, 8 measurements in total, so it was hard to draw firm conclusions from it. The sensors used in the clinical setting, as well as one additional sensor, were evaluated in the phantom experiments. Those experiments were exploratory, as a wider frequency range was used, although some problems in the experiments were found.
A second thread of this thesis is devoted to a data-driven approach, in which a microwave sensor is simulated. The simulated data are used to train a neural network to predict the dielectric properties of materials. The network predicts these properties with relatively high accuracy. However, this approach is currently limited to simulations only. Several ideas on how to improve this approach and extend it to measurements are given.
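The data-driven inversion idea (train a model on simulated sensor responses, then predict dielectric properties from new responses) can be sketched with a toy forward model and a least-squares regressor standing in for the neural network. The response formula below is invented for illustration and is not the thesis's electromagnetic model.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy forward model standing in for the EM simulation: the sensor's
# response shrinks as the permittivity of the material under test grows
def sensor_response(eps):
    return 2.0 / np.sqrt(eps)

# "simulate" training data over a plausible permittivity range
eps_train = rng.uniform(1.0, 60.0, 200)
r = sensor_response(eps_train)

# learned inverse model (a least-squares stand-in for the neural network):
# regress permittivity on basis functions of the measured response
A = np.column_stack([1.0 / r**2, 1.0 / r, np.ones_like(r)])
coef, *_ = np.linalg.lstsq(A, eps_train, rcond=None)

def predict_eps(response):
    """Predict permittivity from a (possibly unseen) sensor response."""
    return coef[0] / response**2 + coef[1] / response + coef[2]

eps_test = np.array([5.0, 20.0, 45.0])
pred = predict_eps(sensor_response(eps_test))
```

A neural network earns its keep when the forward model has no convenient closed form and the mapping must be learned from many full-wave simulations, which is the regime the thesis targets.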
419

3D Building Model Reconstruction from Very High Resolution Satellite Stereo Imagery

Partovi, Tahmineh 02 October 2019 (has links)
Automatic three-dimensional (3D) building model reconstruction using remote sensing data is crucial in applications which require large-scale and frequent building model updates, such as disaster monitoring and urban management, to avoid huge manual efforts and costs. Recent advances in the availability of very high-resolution satellite data together with efficient data acquisition and large area coverage have led to an upward trend in their applications for 3D building model reconstruction. In this dissertation, a novel multistage hybrid automatic 3D building model reconstruction approach is proposed which reconstructs building models in level of details 2 (LOD2) based on digital surface model (DSM) data generated from the very high-resolution stereo imagery of the WorldView-2 satellite. This approach uses DSM data in combination with orthorectified panchromatic (PAN) and pan-sharpened data of multispectral satellite imagery to overcome the drawbacks of DSM data, such as blurred building boundaries, rough building shapes, and unwanted failures in the roof geometries. In the first stage, the rough building boundaries in the DSM-based building masks are refined by classifying the geometrical features of the corresponding PAN images. The refined boundaries are then simplified in the second stage through a parameterization procedure which represents the boundaries by a set of line segments. The main orientations of buildings are then determined, and the line segments are regularized accordingly. The regularized line segments are then connected to each other based on a rule-based method to form polygonal building boundaries. In the third stage, a novel technique is proposed to decompose the building polygons into a number of rectangles under the assumption that buildings are usually composed of rectangular structures. In the fourth stage, a roof model library is defined, which includes flat, gable, half-hip, hip, pyramid and mansard roofs. 
These primitive roof types are then assigned to the rectangles based on a deep learning-based classification method. In the fifth stage, a novel approach is developed to reconstruct watertight parameterized 3D building models based on the results of the previous stages and the normalized DSM (nDSM) of satellite imagery. In the final stage, a novel approach is proposed to optimize building parameters based on an exhaustive search, so that the two-dimensional (2D) distance between the 3D building models and the building boundaries (obtained from building masks and the PAN image) as well as the 3D normal distance between the 3D building models and the 3D point clouds (obtained from the nDSM) are minimized. Different parts of the building blocks are then merged through a newly proposed intersection and merging process. All corresponding experiments were conducted on four areas of the city of Munich including 208 buildings, and the results were evaluated qualitatively and quantitatively. According to the results, the proposed approach could accurately reconstruct 3D models of buildings, even complex ones with several inner yards and multiple orientations. Furthermore, the proposed approach provided a high level of automation through the limited number of primitive roof model types required and through automatic parameter initialization. In addition, the proposed boundary refinement method improved the area accuracy of the DSM-based building masks by 8%. Furthermore, the ridge line directions and roof types were detected accurately for most of the buildings. The combination of the first three stages improved the accuracy of the building boundaries by 70% in comparison to using line segments extracted from building masks without refinement. Moreover, the proposed optimization approach could, in most cases, achieve the best combinations of 2D and 3D geometrical parameters of roof models. 
Finally, the intersection and merging process could successfully merge different parts of the complex building models.
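The second-stage regularization of boundary line segments toward the building's main orientations can be sketched as angle snapping: each segment's direction is pulled to the dominant orientation or its perpendicular when close enough. The tolerance and angles below are hypothetical, and this is a simplified illustration, not the dissertation's full procedure.

```python
def regularize(segment_angles, main_orientation, tol=20.0):
    """Snap each boundary segment's direction (degrees, taken mod 180) to
    the building's main orientation or its perpendicular when within `tol`;
    genuinely oblique segments are left unchanged."""
    targets = [main_orientation % 180, (main_orientation + 90) % 180]
    out = []
    for a in segment_angles:
        a = a % 180
        # angular distance on the 180-degree circle of undirected lines
        best = min(targets, key=lambda t: min(abs(a - t), 180 - abs(a - t)))
        diff = min(abs(a - best), 180 - abs(a - best))
        out.append(best if diff <= tol else a)
    return out

# hypothetical segment directions extracted from a building mask
angles = [2.0, 88.0, 178.0, 91.5, 45.0]
reg = regularize(angles, main_orientation=0.0)
```

After snapping, consecutive segments share only two directions, which is what makes the subsequent rule-based connection into rectilinear polygons, and the rectangle decomposition of stage three, tractable.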
420

Data-driven modeling and simulation of spatiotemporal processes with a view toward applications in biology

Maddu Kondaiah, Suryanarayana 11 January 2022 (has links)
Mathematical modeling and simulation has emerged as a fundamental means to understand physical processes around us, with countless real-world applications in applied science and engineering problems. However, heavy reliance on first principles, symmetry relations, and conservation laws has limited its applicability to a few scientific domains and even fewer real-world scenarios. Especially in disciplines like biology, the underlying living constituents exhibit a myriad of complexities like non-linearities, non-equilibrium physics, self-organization and plasticity that routinely escape mathematical treatment based on governing laws. Meanwhile, recent decades have witnessed rapid advancement in computing hardware, sensing technologies, and algorithmic innovations in machine learning. This progress has helped propel data-driven paradigms to achieve unprecedented practical success in the fields of image processing and computer vision, natural language processing, and autonomous transport. In the current thesis, we explore, apply, and advance statistical and machine learning strategies that help bridge the gap between data and mathematical models, with a view toward modeling and simulation of spatiotemporal processes in biology. First, we address the problem of learning interpretable mathematical models of biological processes from limited and noisy data. For this, we propose a statistical learning framework called PDE-STRIDE based on the theory of stability selection and ℓ0-based sparse regularization for parsimonious model selection. The PDE-STRIDE framework enables model learning with relaxed dependencies on tuning parameters, sample size and noise levels. We demonstrate the practical applicability of our method on real-world data by considering a purely data-driven re-evaluation of the advective triggering hypothesis explaining the embryonic patterning event in the C. elegans zygote. 
As a next natural step, we extend our PDE-STRIDE framework to leverage prior knowledge from physical principles to learn biologically plausible and physically consistent models rather than models that simply fit the data best. For this, we modify the PDE-STRIDE framework to handle structured sparsity constraints for grouping features, which enables us to: 1) enforce conservation laws, 2) extract spatially varying non-observables, 3) encode symmetry relations associated with the underlying biological process. We show several applications from systems biology demonstrating the claim that enforcing priors dramatically enhances the robustness and consistency of the data-driven approaches. In the following part, we apply our statistical learning framework to learning mean-field deterministic equations of active matter systems directly from stochastic self-propelled active particle simulations. We investigate two examples of particle models which differ in the microscopic interaction rules being used. First, we consider a self-propelled particle model endowed with density-dependent motility. For the chosen hydrodynamic variables, our data-driven framework learns continuum partial differential equations that are in excellent agreement with analytically derived coarse-grained equations from the Boltzmann approach. In addition, our structured sparsity framework is able to decode the hidden dependency between particle speed and the local density intrinsic to the self-propelled particle model. As a second example, the learning framework is applied to coarse-graining a popular stochastic particle model employed for studying the collective cell motion in epithelial sheets. The PDE-STRIDE framework is able to infer a novel PDE model that quantitatively captures the flow statistics of the particle model in the regime of low density fluctuations. 
Modern microscopy techniques produce gigabytes (GB) and terabytes (TB) of data while imaging spatiotemporal developmental dynamics of living organisms. However, classical statistical learning based on penalized linear regression models struggles with issues like accurate computation of derivatives in the candidate library and with computational scalability for application to “big” and noisy data-sets. For this reason we exploit the rich parameterization of neural networks that can efficiently learn from large data-sets. Specifically, we explore the framework of Physics-Informed Neural Networks (PINN), which allows for seamless integration of physics priors with measurement data. We propose novel strategies for multi-objective optimization that allow for adapting the PINN architecture to multi-scale modeling problems arising in biology. We showcase application examples for both forward and inverse modeling of the mesoscale active turbulence phenomenon observed in dense bacterial suspensions. Employing our strategies, we demonstrate orders of magnitude gain in accuracy and convergence in comparison with the conventional formulation for solving multi-objective optimization in PINNs. In the concluding chapter of the thesis, we set aside model interpretability and focus on learning computable models directly from noisy data for the purpose of pure dynamics forecasting. We propose STENCIL-NET, an artificial neural network architecture that learns a solution-adaptive spatial discretization of an unknown PDE model that can be stably integrated in time with negligible loss in accuracy. To support this claim, we present numerical experiments on long-term forecasting of chaotic PDE solutions on coarse spatio-temporal grids, and also showcase a de-noising application that helps decompose spatiotemporal dynamics from the noise in an equation-free manner.
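The sparsity-promoting model selection at the heart of approaches like PDE-STRIDE can be illustrated with sequentially thresholded least squares on a synthetic dynamics example. This is a standard stand-in, not the thesis's ℓ0/stability-selection algorithm, and the derivative is taken as known here (in practice it would be estimated from noisy measurements).

```python
import numpy as np

rng = np.random.default_rng(0)

def stlsq(Theta, target, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: fit, zero out small
    coefficients, refit on the surviving library terms, and repeat."""
    xi, *_ = np.linalg.lstsq(Theta, target, rcond=None)
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big], *_ = np.linalg.lstsq(Theta[:, big], target, rcond=None)
    return xi

# synthetic dynamics du/dt = -2 u, with small measurement noise
t = np.linspace(0.0, 2.0, 200)
u = np.exp(-2.0 * t)
dudt = -2.0 * u + 0.001 * rng.normal(size=u.size)

# library of candidate right-hand-side terms the model could contain
Theta = np.column_stack([np.ones_like(u), u, u**2, u**3])
xi = stlsq(Theta, dudt)
```

For spatiotemporal data the library would also contain spatial derivative terms, and stability selection would replace the single hard threshold with repeated fits on subsamples, but the sparsification loop above captures the core mechanism.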
