241

Influencing subjective well-being for business and sustainable development using big data and predictive regression analysis

Weerakkody, Vishanth J.P., Sivarajah, Uthayasankar, Mahroof, Kamran, Maruyama, Takao, Lu, Shan 21 August 2020 (has links)
Yes / Business leaders and policymakers within service economies are placing greater emphasis on well-being, given the role of workers in such settings. Whilst people's well-being can lead to economic growth, it can also have the opposite effect if overlooked. Enhancing subjective well-being (SWB) is therefore pertinent for all organisations and for the sustainable development of an economy. While health conditions were previously deemed the most reliable predictors of well-being, the availability of data on people's personal lifestyles now offers organisations a new perspective on it. Using open data from the UK's national Annual Population Survey, which measures SWB, this research found that, among several independent variables used to predict varying levels of people's perceived well-being, long-term health conditions, marital status and age played a key role in SWB. The proposed model provides the key indicators for measuring SWB in organisations using big data.
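As an illustration of the kind of predictive regression the abstract describes, the minimal sketch below fits an ordinary least squares model of a life-satisfaction score on long-term health condition, marital status and age. The file name and column names are hypothetical stand-ins for an Annual Population Survey extract; the paper's actual variables and model specification are not reproduced here.

```python
# Minimal sketch of a predictive regression for subjective well-being (SWB).
# The input file and column names are hypothetical, not the paper's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aps_swb_extract.csv")  # hypothetical APS extract

# Life satisfaction (0-10) regressed on the predictors the abstract highlights.
model = smf.ols(
    "life_satisfaction ~ C(long_term_health_condition) + C(marital_status) + age",
    data=df,
).fit()
print(model.summary())
```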
242

Ranking online consumer reviews

Saumya, S., Singh, J.P., Baabdullah, A.M., Rana, Nripendra P., Dwivedi, Y.K. 26 September 2020 (has links)
Yes / Product reviews for popular products are posted online in the hundreds or thousands. Handling such a large volume of continuously generated online content is a challenging task for buyers, sellers and researchers. The purpose of this study is to rank this overwhelming number of reviews using their predicted helpfulness scores. The helpfulness score is predicted from features extracted from the review text, the product description, and the customer question-answer data of a product, using a random-forest classifier and a gradient boosting regressor. The system first classifies reviews into low or high quality with the random-forest classifier; helpfulness scores are then predicted, with the gradient boosting regressor, for the high-quality reviews only. The helpfulness scores of the low-quality reviews are not calculated because such reviews will never be among the top k; they are simply appended to the end of the review list on the review-listing website. The proposed system provides fair review placement on review-listing pages and makes all high-quality reviews visible to customers at the top. The experimental results on data from two popular Indian e-commerce websites validate our claim: based on review helpfulness, 3–4 newer high-quality reviews are placed in the top ten alongside 5–6 older ones. Our findings indicate that including features from product description data and customer question-answer data improves the prediction accuracy of the helpfulness score. / Ministry of Electronics and Information Technology (MeitY), Government of India for financial support during research work through “Visvesvaraya PhD Scheme for Electronics and IT”.
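The two-stage pipeline described above can be sketched as follows. The feature matrices and labels are assumed to be precomputed; the paper's exact features from review text, product descriptions and question-answer data are not reproduced here.

```python
# Sketch of the two-stage review-ranking pipeline: a random-forest classifier
# gates reviews into low/high quality; a gradient boosting regressor scores
# only the high-quality ones. Features (X arrays) are assumed precomputed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor

def rank_reviews(X_train, y_quality, y_helpfulness, X_new):
    """Return indices of X_new in display order: high-quality reviews first
    (descending predicted helpfulness), low-quality reviews appended."""
    clf = RandomForestClassifier().fit(X_train, y_quality)

    # Train the regressor on the high-quality training reviews only.
    hq_mask = y_quality == 1
    reg = GradientBoostingRegressor().fit(X_train[hq_mask], y_helpfulness[hq_mask])

    is_hq = clf.predict(X_new) == 1
    scores = np.full(len(X_new), -np.inf)      # low quality: never in top-k
    scores[is_hq] = reg.predict(X_new[is_hq])  # score high quality only
    return np.argsort(-scores)                 # best reviews first
```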
243

An integrated artificial intelligence framework for knowledge creation and B2B marketing rational decision making for improving firm performance

Bag, S., Gupta, S., Kumar, A., Sivarajah, Uthayasankar 23 December 2020 (has links)
Yes / This study examines the effect of big data powered artificial intelligence on customer knowledge creation, user knowledge creation and external market knowledge creation, to better understand its impact on B2B marketing rational decision making and, in turn, firm performance. The theoretical model is grounded in Knowledge Management Theory (KMT), and the primary data were collected from B2B companies operating in the South African mining industry. Findings show that the paths from big data powered artificial intelligence to customer knowledge creation, to user knowledge creation and to external market knowledge creation are each significant. Customer knowledge creation, user knowledge creation and external market knowledge creation, in turn, have a significant effect on B2B marketing rational decision making. Finally, B2B marketing rational decision making has a significant effect on firm performance.
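The hypothesized paths read naturally as a structural model. The sketch below specifies them in lavaan-style syntax using the semopy package; both the tool and the per-construct column names are illustrative assumptions, not the paper's actual estimation setup.

```python
# Hedged sketch: the abstract's hypothesized paths as a structural model.
# semopy and the construct column names are assumptions for illustration.
import pandas as pd
import semopy

# Paths: BD-powered AI -> three kinds of knowledge creation ->
# B2B marketing rational decision making -> firm performance.
PATHS = """
customer_kc ~ bdai
user_kc ~ bdai
external_market_kc ~ bdai
b2b_decision ~ customer_kc + user_kc + external_market_kc
firm_performance ~ b2b_decision
"""

df = pd.read_csv("b2b_survey.csv")  # hypothetical construct scores per firm
model = semopy.Model(PATHS)
model.fit(df)
print(model.inspect())  # path coefficients and p-values
```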
244

Aggregated sensor payload submission model for token-based access control in the Web of Things

Amir, Mohammad, Pillai, Prashant, Hu, Yim Fun 26 October 2015 (has links)
Yes / The Web of Things (WoT) can be considered a merger of the newly emerging paradigms of the Internet of Things (IoT) and cloud computing. Rapidly varying, highly volatile and heterogeneous data traffic is characteristic of the WoT, so the capture, processing, storage and exchange of huge volumes of data is a key requirement in this environment. The crucial resources in the WoT are the sensing devices and the sensing data. Consequently, access control mechanisms employed in this highly dynamic and demanding environment need to be enhanced so as to reduce the end-to-end latency of capturing and exchanging data pertaining to these underlying resources. While there are many previous studies comparing the advantages and disadvantages of access control mechanisms at the algorithm level, very few provide any detailed comparison of the performance of these mechanisms across different data handling procedures in the context of data capture, processing and storage. This study builds on previous work on token-based access control mechanisms and presents a comparison of two different approaches used for handling sensing devices and data in the WoT. It is shown that the aggregated data submission approach is around 700% more efficient than the serial payload submission procedure in reducing the round-trip response time.
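To make the two submission models concrete, the sketch below contrasts serial submission (one authorised request per reading) with aggregated submission (one request carrying a batch). The endpoint URL, token and payload shape are hypothetical; the paper's actual protocol details are not reproduced.

```python
# Sketch contrasting serial vs aggregated sensor payload submission under
# token-based access control. Endpoint and token are hypothetical.
import requests

GATEWAY = "https://wot-gateway.example/sensors/data"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <access-token>"}  # token checked per request

def submit_serial(readings):
    """One round trip (and one token validation) per reading."""
    for reading in readings:
        requests.post(GATEWAY, json=reading, headers=HEADERS, timeout=5)

def submit_aggregated(readings):
    """A single round trip carrying the whole batch: the token is validated
    once, which is where the round-trip latency saving comes from."""
    requests.post(GATEWAY, json={"batch": readings}, headers=HEADERS, timeout=5)

readings = [{"sensor": "temp-01", "value": 21.5},
            {"sensor": "temp-02", "value": 19.8}]
submit_aggregated(readings)
```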
245

Towards a framework for engineering big data: An automotive systems perspective

Byrne, Thomas J., Campean, Felician, Neagu, Daniel 05 1900 (has links)
Yes / The demand for more sophisticated models to meet big data expectations imposes significant data repository obligations, with models operating concurrently in higher-level applications. Current models provide only disjointed modelling paradigms. The proposed framework addresses the need for higher-level abstraction, using low-level logic in the form of axioms from which higher-level functionality is logically derived. The framework facilitates the definition and use of subjective structures across the cyber-physical system domain, and is intended to converge the range of heterogeneous data-driven objects.
246

Big data in predictive toxicology

Neagu, Daniel, Richarz, A-N. 15 January 2020 (has links)
No / Toxicological data are being generated at an ever-increasing rate, and the volume of data is growing dramatically. This is due in part to advances in software solutions and cheminformatics approaches, which increase the availability of open data from chemical, biological, toxicological and high-throughput screening resources. However, the amplified pace and capacity of data generation achieved by these novel techniques present challenges for organising and analysing the data output. Big Data in Predictive Toxicology discusses these challenges as well as the opportunities of new techniques encountered in data science. It addresses the nature of toxicological big data, their storage, analysis and interpretation. It also details how these data can be applied in toxicity prediction, modelling and risk assessment.
247

Transformative role of big data through enabling capability recognition in construction

Atuahene, Bernard Tuffour, Kanjanabootra, S., Gajendran, T. 10 August 2023 (has links)
Yes / Big data application is a significant transformative driver of change in the retail, health, engineering, and advanced manufacturing sectors. Big data studies in construction are still somewhat limited, although there is increasing interest in what big data application could achieve. Through interviews with construction professionals, this paper identifies the capabilities construction firms need in order to accrue the potentially transformative benefits of big data application in construction. In previous studies, the big data application capabilities needed to transform construction processes focussed on data, people, technology, and organisation. The findings of this research, however, suggest a critical modification to that focus to include knowledge and the organisational environment along with people, data, and technology. The findings show that construction firms use big data with a combination strategy to enable transformation by (a) driving an in-house data management policy to roll out big data capabilities; (b) fostering collaborative capabilities with external firms for resource development; and (c) outsourcing big data services to address the capability deficits impacting digital transformation.
248

Co-creating social licence for sharing health and care data

Fylan, F., Fylan, Beth 25 March 2021 (has links)
Yes / Optimising the use of patient data has the potential to produce a transformational change in healthcare planning, treatment, condition prevention and understanding of disease progression. Establishing how people's trust could be secured and a social licence to share data achieved is of paramount importance. The study took place across Yorkshire and the Humber, in the North of England, using a sequential mixed methods approach comprising focus groups, surveys and co-design groups. Twelve focus groups explored people's responses to how their health and social care data is, could, and should be used. A survey examined who should be able to see health and care records, acceptable uses of anonymous health and care records, and trust in different organisations. Case study cards addressed willingness for data to be used for different purposes. Co-creation workshops produced a set of guidelines for how data should be used. Focus group participants (n = 80) supported sharing health and care data for direct care and were surprised that this is not already happening. They discussed concerns about the currency and accuracy of their records and possible stigma associated with certain diagnoses, such as mental health conditions. They were less supportive of social care access to their records. They discussed three main concerns about their data being used for research or service planning: being identified; security limitations; and the potential rationing of care on the basis of information in their record, such as their lifestyle choices. Survey respondents (n = 1031) agreed that their GP (98 %) and hospital doctors and nurses (93 %) should be able to see their health and care records. There was more limited support for pharmacists (37 %), care staff (36 %), social workers (24 %) and researchers (24 %). Respondents thought their health and social care records should be used to help plan services (88 %), to help people stay healthy (67 %), to help find cures for diseases (67 %), and for research for the public good (58 %), but only 16 % supported use for commercial research. Co-creation groups developed a set of principles for a social licence for data sharing based around good governance, effective processes, the type of organisation, and the ability to opt in and out. People support their data being shared for a range of purposes and co-designed a set of principles that would secure their trust and consent to data sharing. / This work was supported by Humber Teaching NHS Foundation Trust and the National Institute for Health Research (NIHR) Yorkshire and Humber Patient Safety Translational Research Centre (NIHR Yorkshire and Humber PSTRC).
249

Traitement et raisonnement distribués des flux RDF / Distributed RDF stream processing and reasoning

Ren, Xiangnan 19 November 2018 (has links)
Real-time processing of data streams emanating from sensors is becoming a common task in industrial scenarios. In an Internet of Things (IoT) context, data are emitted from heterogeneous stream sources, i.e., coming from different domains and data models. This requires that IoT applications efficiently handle data integration mechanisms. The processing of RDF data streams has hence become an important research field. This trend enables a wide range of innovative applications where the real-time and reasoning aspects are pervasive. The key implementation goal of such applications consists in efficiently handling massive incoming data streams and supporting advanced data analytics services like anomaly detection. However, a modern RSP engine has to address the volume and velocity characteristics encountered in the Big Data era. In an ongoing industrial project, we found that a 24/7 available stream processing engine usually faces massive data volumes, dynamically changing data structures and varying workload characteristics. These facts impact the engine's performance and reliability. To address these issues, we propose Strider, a hybrid adaptive distributed RDF Stream Processing engine that optimizes the logical query plan according to the state of the data streams. Strider has been designed to guarantee important industrial properties such as scalability, high availability, fault tolerance, high throughput and acceptable latency. These guarantees are obtained by designing the engine's architecture with state-of-the-art Apache components such as Spark and Kafka. Moreover, an increasing number of processing jobs executed over RSP engines require reasoning mechanisms, which usually come at the cost of a trade-off between data throughput, latency and the computational cost of expressive inferences. Therefore, we extend Strider to support real-time RDFS+ (i.e., RDFS + owl:sameAs) reasoning capability. We combine Strider with a query rewriting approach for SPARQL that benefits from an intelligent encoding of the knowledge base. The system is evaluated along different dimensions and over multiple datasets to emphasize its performance. Finally, we have stepped further to exploratory RDF stream reasoning with a fragment of Answer Set Programming. This part of our research work is mainly motivated by the fact that more and more streaming applications require more expressive and complex reasoning tasks. The main challenge is to cope with the large-volume and high-velocity dimensions in a scalable and inference-enabled manner. Recent efforts in this area are still missing the aspect of system scalability for stream reasoning. Thus, we aim to explore the ability of modern distributed computing frameworks to process highly expressive knowledge inference queries over Big Data streams.
To do so, we consider queries expressed in a positive fragment of LARS (a temporal logic framework based on Answer Set Programming) and propose solutions to process such queries based on the two main execution models adopted by major parallel and distributed execution frameworks: Bulk Synchronous Parallel (BSP) and Record-at-A-Time (RAT). We implement our solution, named BigSR, and conduct a series of evaluations. Our experiments show that BigSR achieves a high throughput beyond a million triples per second using a rather small cluster of machines.
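As a rough illustration of the Spark-and-Kafka architecture the abstract describes (not Strider's actual code, which this record does not include), the sketch below consumes N-Triples lines from a Kafka topic with Spark Structured Streaming and computes a windowed triple count per predicate. The topic name and one-triple-per-line encoding are assumptions.

```python
# Minimal sketch of an RSP-style pipeline on Spark Structured Streaming + Kafka.
# Illustrative only, not Strider's implementation. Assumes the spark-sql-kafka
# package is on the classpath and the topic "rdf-stream" carries N-Triples lines.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, current_timestamp, split, window

spark = SparkSession.builder.appName("rsp-sketch").getOrCreate()

triples = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "rdf-stream")
    .load()
    .selectExpr("CAST(value AS STRING) AS line")
    # Naive N-Triples split into subject, predicate, object.
    .select(split(col("line"), " ").alias("spo"))
    .select(col("spo")[0].alias("s"),
            col("spo")[1].alias("p"),
            col("spo")[2].alias("o"))
    .withColumn("ts", current_timestamp())  # processing-time timestamp
)

# Continuous query: triple count per predicate over 10-second windows.
counts = triples.groupBy(window(col("ts"), "10 seconds"), col("p")).count()

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```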
250

Critérios de seleção de sistemas de gerenciamento de banco de dados não relacionais em organizações privadas / Selection criteria of non-relational database management systems data in private organizations

Souza, Alexandre Morais de 31 October 2013 (has links)
Non-Relational Database Management Systems (NoSQL DBMSs) are software packages for data management using a non-relational model. Given the current context of growth in data generation and the need that organizations have to collect vast amounts of customer information, scientific research, sales and other information for further analysis, it is important to rethink how a suitable DBMS is chosen, considering the organization's economic, technical and strategic factors. This research is concerned with the study of the new database management model known as NoSQL, and its contribution is to present selection criteria to assist consumers of database services in private organizations in selecting a NoSQL DBMS. To meet this objective, a literature review was conducted on the software and DBMS selection process, identifying the criteria used for this purpose. Once the review was complete, the research method was defined as a Delphi panel in the ranking-form mode. Through the panel, after two rounds with a mixed group of experts formed by managers, DBMS vendors, academics, developers, DBAs and DAs, the most relevant criteria for choosing a NoSQL DBMS were determined, ordered according to the score obtained for each criterion. Data were collected through a questionnaire. From the identified criteria, analyses were made of the main selection criteria for NoSQL DBMSs. The conclusions and final considerations then address the results obtained with the Delphi panel. As its main result, this study offers a realistic view of the non-relational model for managing data and presents the most important criteria that make the adoption of NoSQL DBMSs plausible.
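As a small illustration of the ranking-form aggregation the abstract describes, the sketch below orders candidate criteria by their mean expert score in the final Delphi round. The criteria names, round numbers and scores are invented for illustration.

```python
# Hedged sketch of ranking-form aggregation for a Delphi panel: criteria are
# ordered by mean expert score. All names and numbers here are invented.
import pandas as pd

ratings = pd.DataFrame({
    "criterion": ["scalability", "cost", "consistency model",
                  "scalability", "cost", "consistency model"],
    "round":    [2, 2, 2, 2, 2, 2],
    "expert":   ["E1", "E1", "E1", "E2", "E2", "E2"],
    "score":    [9, 7, 8, 8, 6, 9],
})

# Final-round ranking: mean score per criterion, highest first.
ranking = (
    ratings[ratings["round"] == 2]
    .groupby("criterion")["score"].mean()
    .sort_values(ascending=False)
)
print(ranking)
```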
