  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

A robust & reliable Data-driven prognostics approach based on extreme learning machine and fuzzy clustering.

Javed, Kamran 09 April 2014 (has links) (PDF)
Prognostics and Health Management (PHM) aims to extend the life cycle of a physical asset while reducing operating and maintenance costs. Prognostics is therefore regarded as a key process with prediction capabilities: accurate estimates of a machine's Remaining Useful Life (RUL) make it possible to plan actions that increase safety, reduce downtime, ensure mission completion and improve production efficiency. Recent studies show that data-driven approaches are increasingly applied to failure prognostics. They can be seen as "black box" models that learn system behaviour directly from condition-monitoring data, characterise the current state of the system and predict the future progression of faults. However, approximating the behaviour of critical machinery is a difficult task that can result in poor prognostics. Data-driven prognostics modelling therefore raises the following questions. 1) How should raw monitoring data be processed to obtain features that properly reflect the evolution of degradation? 2) How can degradation states be distinguished and failure criteria defined (which may vary from case to case)? 3) How can the models be made robust enough to perform stably under uncertain inputs that deviate from the learned experience, and reliable enough to handle unseen data (i.e. operating conditions, engineering variations, etc.)? 4) How can they be integrated easily under industrial constraints and requirements? These questions are the problems addressed in this thesis.
They led to the development of a new approach that goes beyond the limits of classical data-driven prognostics methods. The main contributions are as follows.
- The data-processing step is improved by a new feature-extraction approach using trigonometric and cumulative functions, selected on the basis of three characteristics: monotonicity, trendability and predictability. The main idea is to transform raw data into indicators that improve the accuracy of long-term predictions.
- To account for robustness, reliability and applicability, a new prediction algorithm is proposed: the Summation Wavelet-Extreme Learning Machine (SW-ELM). SW-ELM achieves good prediction performance while reducing learning time. An ensemble of SW-ELMs is also proposed to quantify uncertainty and improve the accuracy of the estimates.
- Prognostics performance is further strengthened by a new health-assessment algorithm: Subtractive-Maximum Entropy Fuzzy Clustering (S-MEFC). S-MEFC is an unsupervised classification approach that uses maximum-entropy inference to represent the uncertainty of multidimensional data. It can determine the number of states automatically, without human intervention.
- The final prognostics model integrates SW-ELM and S-MEFC to show the evolution of machine degradation with simultaneous predictions and discrete state estimation. This scheme also makes it possible to set failure thresholds dynamically and to estimate the RUL of monitored machinery.
The developments are validated on real data from three experimental platforms, PRONOSTIA FEMTO-ST (a bearings test bed), CNC SIMTech (machining cutters) and NASA C-MAPSS (turbofan engines), as well as on further benchmark data. Owing to the realistic nature of the proposed RUL estimation strategy, very promising results are achieved. The main perspective for future work, however, is to improve the reliability of the prognostics model.
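The core of an Extreme Learning Machine, on which SW-ELM builds, can be stated in a few lines: hidden-layer weights and biases are drawn at random, and only the output weights are fitted, in closed form, by least squares. A minimal single-output sketch in plain NumPy (the thesis's SW-ELM additionally uses wavelet activation functions, Nguyen-Widrow initialisation and an ensemble, none of which are shown here):

```python
import numpy as np

def elm_train(X, y, n_hidden=20, seed=0):
    """Fit a basic ELM: random input weights, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (kept fixed)
    b = rng.normal(size=n_hidden)                 # random biases (kept fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn y = sin(x) on [0, 3]
X = np.linspace(0, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y, n_hidden=30)
y_hat = elm_predict(X, W, b, beta)
```

Because only the output layer is trained, fitting reduces to a single linear solve, which is why ELM-type models learn quickly compared with backpropagation-trained networks.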
232

Environmental prediction and risk analysis using fuzzy numbers and data-driven models

Khan, Usman Taqdees 17 December 2015 (has links)
Dissolved oxygen (DO) is an important water quality parameter used to assess the health of aquatic ecosystems. Typically, physically-based numerical models are used to predict DO; however, these models do not capture the complexity and uncertainty seen in highly urbanised riverine environments. To overcome these limitations, an alternative approach is proposed in this dissertation that uses a combination of data-driven methods and fuzzy numbers to improve DO prediction in urban riverine environments. A major issue in implementing fuzzy numbers is that there is no consistent, transparent and objective method to construct fuzzy numbers from observations. A new method to construct fuzzy numbers is proposed which uses the relationship between probability and possibility theory. Numerical experiments demonstrate that the typical linear membership functions are inappropriate for environmental data. A new algorithm to estimate the membership function is developed, in which a bin-size optimisation algorithm is paired with a numerical technique using the fuzzy extension principle. The developed method requires no assumptions about the underlying distribution or the selection of an arbitrary bin size, and has the flexibility to create different shapes of fuzzy numbers. The impact of input data resolution and error value on the membership function is analysed. Two new fuzzy data-driven methods, fuzzy linear regression and a fuzzy neural network, are proposed to predict DO using real-time data. These methods use fuzzy inputs, fuzzy outputs and fuzzy model coefficients to characterise the total uncertainty; existing methods cannot accommodate fuzzy numbers for each of these variables. The new fuzzy regression method was compared against two existing fuzzy regression methods, Bayesian linear regression, and error-in-variables regression. The new method was better able to predict DO owing to its ability to incorporate different sources of uncertainty in each component. A number of model assessment metrics were proposed to quantify fuzzy model performance, and the fuzzy linear regression methods outperformed the probability-based methods. Similar results were seen when the method was used for peak flow rate prediction. An existing fuzzy neural network model was refined through possibility-theory-based calibration of the network parameters and the use of fuzzy rather than crisp inputs. A method to find the optimum network architecture was proposed to select the number of hidden neurons and the amount of data used for training, validation and testing. The performance of the updated fuzzy neural network was compared to the crisp results; the method demonstrated an improved ability to predict low DO compared to non-fuzzy techniques. The fuzzy data-driven methods using non-linear membership functions correctly identified the occurrence of extreme events. These predictions were used to quantify risk using a new possibility-probability transformation. All combinations of inputs that lead to a risk of low DO were identified to create a risk tool for water resource managers. Results from this research provide new tools to predict environmental factors in a highly complex and uncertain environment using fuzzy numbers. / Graduate / 0543 / 0775 / 0388
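One standard way to link the two theories mentioned above is the probability-possibility transformation of Dubois and Prade, in which the possibility of an outcome is the sum of the probabilities of all outcomes no more probable than it. The sketch below builds a histogram-based membership function from a sample in this way; it illustrates the general idea only, not the exact construction or bin-size optimisation developed in the dissertation:

```python
import numpy as np

def possibility_from_probability(p):
    """Dubois-Prade transformation: pi_i = sum of all p_j <= p_i.
    The most probable bin receives possibility 1."""
    p = np.asarray(p, dtype=float)
    return np.array([p[p <= pi].sum() for pi in p])

def membership_from_sample(sample, bins=10):
    """Histogram-based membership function (fuzzy number) from observations."""
    counts, edges = np.histogram(sample, bins=bins)
    p = counts / counts.sum()                      # empirical probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])       # bin midpoints
    return centers, possibility_from_probability(p)

# Toy usage on a synthetic DO-like sample (illustrative data only)
rng = np.random.default_rng(1)
centers, mu = membership_from_sample(rng.normal(8.0, 1.0, 5000), bins=15)
```

The most probable bin always receives possibility 1, so the resulting membership function is a normalised fuzzy set regardless of the sample's underlying distribution.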
233

Projeto de controladores não lineares utilizando referência virtual

Neuhaus, Tassiano January 2012 (has links)
This work presents concepts related to linear and nonlinear system identification, together with the idea of a virtual reference which, combined with data-based controller design theory, provides a design framework for nonlinear controllers. The Virtual Reference Feedback Tuning (VRFT) method, which uses a virtual reference to obtain the signals needed to characterise a system's optimal controller, serves as the basis for this proposal: nonlinear system identification algorithms are combined with the virtual reference to obtain the ideal controller, the one that makes the closed-loop system behave as specified. The controller is characterised with a rational model structure, chosen for the wide variety of practical systems this class is able to describe. For rational system identification, an iterative algorithm is used which, based on the plant's input and output signals, identifies the parameters of the predefined controller structure from the signals obtained by virtual reference. To demonstrate the proposed controller design methodology, illustrative examples are presented both for situations where the ideal controller can be represented by the model class and for situations where it cannot.
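For a linearly parameterised controller, the virtual-reference idea reduces to a single least-squares problem: invert the desired closed-loop model on the measured output to obtain the virtual reference, form the virtual error, and fit the controller so that it reproduces the recorded input. A minimal linear sketch with an assumed first-order plant and reference model (the thesis extends this to rational, nonlinear controller structures):

```python
import numpy as np

# Simulated plant data (illustrative): y(t) = 0.9*y(t-1) + 0.1*u(t-1)
rng = np.random.default_rng(0)
N = 500
u = rng.normal(size=N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.9 * y[t - 1] + 0.1 * u[t - 1]

# Desired closed loop (reference model): y(t) = 0.5*y(t-1) + 0.5*r(t-1).
# Virtual reference: invert the reference model on the measured output.
r_virt = np.zeros(N)
r_virt[:-1] = (y[1:] - 0.5 * y[:-1]) / 0.5
e = r_virt - y                                # virtual tracking error

# Controller class: u(t) - u(t-1) = th1*e(t) + th2*e(t-1)  (a PI law)
Phi = np.column_stack([e[1:-1], e[0:-2]])     # regressors e(t), e(t-1)
rhs = u[1:-1] - u[0:-2]                       # recorded control increments
theta, *_ = np.linalg.lstsq(Phi, rhs, rcond=None)
```

For this assumed plant the ideal PI controller, theta = (5, -4.5), lies inside the chosen controller class, so least squares recovers it from data alone, without a plant model ever being identified.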
234

O uso de fontes documentais no jornalismo guiado por dados

Gehrke, Marília January 2018 (has links)
This dissertation studies the news sources used in data-driven journalism (DDJ). It revisits the classifications of news sources discussed by theorists in the field and situates the current context, shaped by social and technological transformations, within the perspectives of the networked society and networked journalism. The focus is on identifying which sources are mobilised in DDJ news, a practice that emerged in this scenario from the 2000s onwards. A corpus of 60 news items is analysed, published by O Globo, The New York Times and La Nación as traditional outlets, and Nexo, FiveThirtyEight and Chequeado as native ones. Combining theory and the empirical study, it proposes a classification of the types of sources found in DDJ news: documentary file, statistics and reproduction. Through this classification, it aims to fill a gap in the theoretical approach to sources, only superficially discussed in journalism until now, bringing the use of documents to the fore.
235

Blended Professional Development: Toward a Data-Informed Model of Instruction

January 2017 (has links)
abstract: Data and the use of data to make educational decisions have attained new-found prominence in K-12 education following the inception of high-stakes testing and the subsequent linking of teacher evaluations and teacher-performance pay to students' outcomes on standardized assessments. Although the research literature suggested students' academic performance benefits from employing data-informed decision making (DIDM), many educators have not felt efficacious about implementing and using DIDM practices. Additionally, the literature suggested a five-factor model of teachers' efficacy and anxiety with respect to using DIDM practices: (a) identification of relevant information, (b) interpretation of relevant information, (c) application of interpretations of data to their classroom practices, (d) requisite technological skills, and (e) comfort with data and statistics. This action research study was designed to augment a program of support focused on DIDM offered at a K-8 charter school in Arizona. It sought to better understand the relation between participation in professional development (PD) modules and teachers' self-efficacy for using DIDM practices. It provided an online PD component in which 19 kindergarten through 8th-grade teachers worked through three self-guided online learning modules, focused sequentially on (a) identification of relevant student data, (b) interpretation of relevant student data, and (c) application of interpretations of data to classroom practices. Each module concluded with an in-person reflection session in which teachers shared artifacts they had developed based on the modules, discussed challenges, shared solutions, and considered applications to their classrooms. Results of quantitative data from pre- and post-intervention assessments suggested the intervention positively influenced participants' self-efficacy for (a) identifying and (b) interpreting relevant student data. Qualitative results from eight semi-structured interviews conducted at the conclusion of the intervention indicated that teachers, regardless of previous experience using data, viewed DIDM favorably and were more able to find and draw conclusions from their data than they were prior to the intervention. The quantitative and qualitative data exhibited complementarity, pointing to the same conclusions. The discussion focused on explaining how the intervention influenced participants' self-efficacy for using DIDM practices, anxiety around using DIDM practices, and use of DIDM practices. / Dissertation/Thesis / Doctoral Dissertation Leadership and Innovation 2017
236

Leveraging Customer Information in New Service Development : An Exploratory Study Within the Telecom Industry

Beijer, Sebastian, Magnusson, Per January 2018 (has links)
There is increasing pressure on service firms to innovate and compete on new offerings. As our lives become more digitized through the ubiquitous connectivity of digital devices, companies are able to collect vast amounts of varied data in real time and thus know radically more about their customers. Companies could leverage this growing body of data to develop relevant services based on customer demands. One industry well placed to benefit from customer information is the telecom industry, owing to fierce competition and a need for innovation in a saturated market. Hence, the purpose of this study is to investigate how telecom companies use customer information in their development of new services by answering the research question: How do telecom companies use customer information within their New Service Development process? To illuminate this, a qualitative study was conducted at three Swedish telecom companies. The findings indicate that telecom companies hold a beneficial position, since the digital nature of their services lets them collect a vast amount of data about their customers. However, they struggle to efficiently integrate the data and to disseminate the obtained knowledge seamlessly within the organisation. Leveraging customer information in new service development has therefore not reached its full potential, and how well it is incorporated is determined by the skills of key employees and their collaboration rather than by established internal processes.
239

Accuracy of Software Reliability Prediction from Different Approaches

Vasudev, R.Sashin, Vanga, Ashok Reddy January 2008 (has links)
Many models have been proposed for software reliability prediction, but none of them captures a sufficient range of software characteristics. We propose a mixed approach that uses both analytical and data-driven models to assess the accuracy of reliability prediction through a case study. This report follows a qualitative research strategy. Data were collected from a case study conducted at three different companies. Based on the case study, an analysis is made of the approaches used by the companies, together with other data related to the organizations' Software Quality Assurance (SQA) teams. Of the three organizations, the first two are working on reliability prediction, while the third is a growing company developing a product with less focus on quality. Data were collected by interviewing an employee of each organization who leads a team and has held a managerial position for at least the last two years. / svra06@student.bth.se
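As an illustration of the analytical side of such a mixed approach, a classical software reliability growth model can be fitted to cumulative failure counts. The sketch below fits the Goel-Okumoto NHPP model, mu(t) = a*(1 - exp(-b*t)), to assumed weekly failure data using a grid search over b with a closed-form least-squares estimate of a; both the data and the model choice are illustrative, not taken from the case study:

```python
import numpy as np

# Hypothetical cumulative failure counts over ten test weeks (assumed data)
t = np.arange(1, 11, dtype=float)
failures = np.array([12, 21, 28, 34, 38, 41, 44, 46, 47, 48], dtype=float)

# Goel-Okumoto NHPP: mu(t) = a * (1 - exp(-b t)).
# For each candidate b, the best a has a closed least-squares form,
# so a coarse grid search over b suffices for a sketch.
best = None
for b in np.linspace(0.01, 1.0, 1000):
    g = 1.0 - np.exp(-b * t)
    a = (failures @ g) / (g @ g)              # closed-form LS estimate of a
    sse = ((failures - a * g) ** 2).sum()
    if best is None or sse < best[0]:
        best = (sse, a, b)

_, a_hat, b_hat = best
remaining = a_hat - failures[-1]              # expected latent failures
```

Here a_hat estimates the total number of failures latent in the software, so subtracting the failures found so far yields a simple prediction of what remains, the quantity a reliability prediction process is ultimately after.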
240

Evaluating the use of ICN for Internet of things

Carlquist, Johan January 2018 (has links)
The market for IoT devices, and with it constrained wireless sensor networks, continues to grow at a rapid pace. Today the dominant network paradigm is host-centric: users have to specify which host they want to receive their data from. Information-centric networking (ICN) is a new paradigm for the future internet, based on named data instead of named hosts. With ICN, a user sends a request for a particular piece of data in order to retrieve it; any participant in the network, router or server, holding the data will respond to the request. To achieve low latency between data creation and its consumption, and to follow data produced sequentially at a fixed rate, an algorithm was developed. This algorithm calculates when to send the next Interest message towards the sensor. It uses a 'one-time subscription' approach, sending the Interest in advance of the data's creation and thereby enabling low latency from creation to consumption. The results show that a consumer can retrieve data with minimal latency from its creation by the sensor over an extended period of time, without using a publish/subscribe system such as MQTT, which pushes data towards consumers. A performance evaluation of the Content-Centric Networking application on the sensor shows that the application has little impact on the overall round-trip time in the network. Based on these results, the thesis concludes that the ICN paradigm, together with a 'one-time subscription' model, can be a suitable option for communication within the IoT domain where consumers request sequentially produced data.
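The timing idea behind such a 'one-time subscription' can be sketched in a few lines: if data is produced at a known fixed period, the consumer schedules each Interest so that it reaches the producer just as the next item is created, hiding the outbound network delay. The function name and the half-RTT delay model below are illustrative assumptions, not the thesis's exact algorithm:

```python
def next_interest_time(last_creation, period, rtt):
    """Absolute time at which to send the next Interest message.

    The Interest is sent half an RTT before the next expected data
    creation, so it arrives at the producer just as the data appears.
    """
    next_creation = last_creation + period
    return next_creation - rtt / 2.0

# Usage: sensor produces every 5.0 s, measured RTT is 0.4 s,
# and the last sample was created at t = 100.0 s.
send_at = next_interest_time(100.0, 5.0, 0.4)   # 104.8 s
```

The Data packet then travels back immediately on arrival of the Interest, so the consumer-side latency from creation to consumption approaches a single one-way trip, without any push-based broker in the path.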
