241 |
Decision analysis in Turkey. Gonul, M.S., Soyer, E., Onkal, Dilek. 05 1900 (has links)
No
|
242 |
Big Data Analytics and Business Failures in Data-Rich Environments: An Organizing Framework. Amankwah-Amoah, J., Adomako, Samuel. 2018 December 1924 (has links)
Yes / In view of the burgeoning scholarly work on big data and big data analytical capabilities, there remains limited research on how different access to big data and the different big data analytic capabilities possessed by firms can generate diverse conditions leading to business failure. To fill this gap in the existing literature, an integrated framework was developed that entails two approaches to big data as an asset (i.e. threshold resource and distinctive resource) and two types of competences in big data analytics (i.e. threshold competence and distinctive/core competence). The analysis provides insights into how ordinary big data analytic capability and the mere possession of big data are more likely to create conditions for business failure. The study extends the existing streams of research by shedding light on the decisions and processes that facilitate or hamper firms’ ability to harness big data to mitigate the causes of business failure. The analysis led to the identification of a number of fruitful avenues for research on data-driven approaches to business failure.
|
243 |
Influencing subjective well-being for business and sustainable development using big data and predictive regression analysis. Weerakkody, Vishanth J.P., Sivarajah, Uthayasankar, Mahroof, Kamran, Maruyama, Takao, Lu, Shan. 21 August 2020 (has links)
Yes / Business leaders and policymakers within service economies are placing greater emphasis on well-being, given the role of workers in such settings. Whilst people’s well-being can lead to economic growth, it can also have the opposite effect if overlooked. Therefore, enhancing subjective well-being (SWB) is pertinent for all organisations for the sustainable development of an economy. While health conditions were previously deemed the most reliable predictors, the availability of data on people’s personal lifestyles now offers organisations a new perspective on well-being. Using open data available from the national Annual Population Survey in the UK, which measures SWB, this research found that, among several independent variables used to predict varying levels of people's perceived well-being, long-term health conditions, marital status, and age played a key role in SWB. The proposed model provides the key indicators for measuring SWB for organisations using big data.
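As a concrete illustration of the kind of predictive regression the abstract describes, the sketch below fits a model on the predictors the study highlights (long-term health condition, marital status, age). The file name, column names and the use of scikit-learn are assumptions for illustration, not the study's actual Annual Population Survey variables or code.

```python
# Illustrative sketch only; the CSV file and column names are hypothetical
# stand-ins for an extract of the UK Annual Population Survey open data.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("aps_wellbeing.csv")                 # hypothetical APS extract
X = df[["long_term_health_condition", "marital_status", "age"]]
y = df["life_satisfaction"]                           # assumed 0-10 SWB measure

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"),
          ["long_term_health_condition", "marital_status"])],
        remainder="passthrough")),                    # age passes through as numeric
    ("regress", LinearRegression()),
])
model.fit(X, y)
print(model.named_steps["regress"].coef_)             # weights of the key predictors
```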
|
244 |
Ranking online consumer reviews. Saumya, S., Singh, J.P., Baabdullah, A.M., Rana, Nripendra P., Dwivedi, Y.K. 26 September 2020 (has links)
Yes / Product reviews for popular products are posted online in their hundreds and thousands. Handling such a large volume of continuously generated online content is a challenging task for buyers, sellers and researchers. The purpose of this study is to rank the overwhelming number of reviews using their predicted helpfulness scores. The helpfulness score is predicted using features extracted from the review text, product description, and customer question-answer data of a product, using a random-forest classifier and a gradient boosting regressor. The system first classifies reviews into low or high quality with the random-forest classifier. Helpfulness scores are then predicted, with the gradient boosting regressor, only for the high-quality reviews; scores for low-quality reviews are not calculated because these reviews will never appear among the top k, and they are simply appended to the end of the review list on the review-listing website. The proposed system provides fair review placement on review listing pages and makes high-quality reviews visible to customers at the top. Experimental results on data from two popular Indian e-commerce websites validate our claim: based on review helpfulness, 3–4 newer high-quality reviews are placed in the top ten alongside 5–6 older reviews. Our findings indicate that the inclusion of features from product description data and customer question-answer data improves the prediction accuracy of the helpfulness score. / Ministry of Electronics and Information Technology (MeitY), Government of India for financial support during research work through “Visvesvaraya PhD Scheme for Electronics and IT”.
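As a rough sketch of the two-stage ranking pipeline described above (not the authors' implementation), the function below classifies reviews with a random-forest classifier and predicts helpfulness only for the high-quality ones with a gradient boosting regressor, pushing low-quality reviews to the bottom. The feature matrices, labels and top-k cut-off are assumed NumPy inputs supplied by the caller.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestClassifier

def rank_reviews(X_train, y_quality, y_help, X_new, reviews_new, k=10):
    """Return the top-k reviews from reviews_new by predicted helpfulness."""
    # Stage 1: classify training reviews as high (1) or low (0) quality.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_quality)

    # Stage 2: fit the helpfulness regressor on high-quality training reviews only.
    high_train = y_quality == 1
    reg = GradientBoostingRegressor(random_state=0)
    reg.fit(X_train[high_train], y_help[high_train])

    # Score new reviews; low-quality ones get -inf so they sink to the bottom.
    is_high = clf.predict(X_new) == 1
    scores = np.full(len(reviews_new), -np.inf)
    scores[is_high] = reg.predict(X_new[is_high])

    order = np.argsort(-scores)          # highest predicted helpfulness first
    return [reviews_new[i] for i in order[:k]]
```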
|
245 |
An integrated artificial intelligence framework for knowledge creation and B2B marketing rational decision making for improving firm performance. Bag, S., Gupta, S., Kumar, A., Sivarajah, Uthayasankar. 23 December 2020 (has links)
Yes / This study examines the effect of big data powered artificial intelligence on customer knowledge creation, user knowledge creation and external market knowledge creation, to better understand its impact on B2B marketing rational decision making and, in turn, on firm performance. The theoretical model is grounded in Knowledge Management Theory (KMT) and the primary data were collected from B2B companies operating in the South African mining industry. Findings show that the path from big data powered artificial intelligence to customer knowledge creation is significant. Secondly, the path from big data powered artificial intelligence to user knowledge creation is significant. Thirdly, the path from big data powered artificial intelligence to external market knowledge creation is significant. It was also observed that customer knowledge creation, user knowledge creation and external market knowledge creation have a significant effect on B2B marketing rational decision making. Finally, B2B marketing rational decision making has a significant effect on firm performance.
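The abstract reports significance tests over a set of hypothesised paths. As a loose approximation only (the study estimates a structural equation model, not separate regressions), the sketch below mimics those paths with ordinary least squares fits over hypothetical construct scores; the CSV file and variable names are invented for illustration.

```python
# Loose approximation of the hypothesised path structure using separate OLS fits.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("b2b_constructs.csv")  # hypothetical survey construct scores

# Paths from big data powered AI (bd_ai) to the three knowledge creation constructs.
m1 = smf.ols("customer_knowledge ~ bd_ai", data=df).fit()
m2 = smf.ols("user_knowledge ~ bd_ai", data=df).fit()
m3 = smf.ols("external_market_knowledge ~ bd_ai", data=df).fit()

# Knowledge creation to rational decision making, and decision making to performance.
m4 = smf.ols("rational_decision_making ~ customer_knowledge + user_knowledge"
             " + external_market_knowledge", data=df).fit()
m5 = smf.ols("firm_performance ~ rational_decision_making", data=df).fit()

for m in (m1, m2, m3, m4, m5):
    print(m.params, m.pvalues)           # path estimates and their p-values
```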
|
246 |
Aggregated sensor payload submission model for token-based access control in the Web of Things. Amir, Mohammad, Pillai, Prashant, Hu, Yim Fun. 26 October 2015 (has links)
Yes / The Web of Things (WoT) can be considered a merger of the newly emerging paradigms of the Internet of Things (IoT) and cloud computing. Rapidly varying, highly volatile and heterogeneous data traffic is a characteristic of the WoT. Hence, the capture, processing, storage and exchange of huge volumes of data is a key requirement in this environment. The crucial resources in the WoT are the sensing devices and the sensing data. Consequently, access control mechanisms employed in this highly dynamic and demanding environment need to be enhanced so as to reduce the end-to-end latency for capturing and exchanging data pertaining to these underlying resources. While there are many previous studies comparing the advantages and disadvantages of access control mechanisms at the algorithm level, very few provide any detailed comparison of the performance of these access control mechanisms when used for different data handling procedures in the context of data capture, processing and storage. This study builds on previous work on token-based access control mechanisms and presents a comparison of two different approaches used for handling sensing devices and data in the WoT. It is shown that the aggregated data submission approach is around 700% more efficient than the serial payload submission procedure in reducing the round-trip response time.
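For intuition about the two data handling procedures being compared, here is a toy sketch of serial versus aggregated payload submission from a client holding a bearer token. The endpoint URL, token and payload shape are hypothetical; the paper's actual WoT platform and token scheme are not reproduced here.

```python
import requests

TOKEN = "example-access-token"                          # hypothetical bearer token
ENDPOINT = "https://wot-gateway.example/api/readings"   # hypothetical gateway URL
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

readings = [{"sensor_id": i, "value": 20.0 + i} for i in range(100)]

def submit_serial(readings):
    # One HTTP round trip (and one token check) per individual reading.
    for reading in readings:
        requests.post(ENDPOINT, json=reading, headers=HEADERS, timeout=5)

def submit_aggregated(readings):
    # A single round trip carrying every reading in one aggregated payload,
    # so the token is validated once instead of len(readings) times.
    requests.post(ENDPOINT, json={"readings": readings}, headers=HEADERS, timeout=5)
```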
|
247 |
Towards a framework for engineering big data: An automotive systems perspective. Byrne, Thomas J., Campean, Felician, Neagu, Daniel. 05 1900 (has links)
Yes / Demand for more sophisticated models to meet big data expectations requires significant data repository obligations, operating concurrently in higher-level applications. Current models provide only disjointed modelling paradigms. The proposed framework addresses the need for higher-level abstraction, using low-level logic in the form of axioms from which higher-level functionality is logically derived. The framework facilitates the definition and use of subjective structures across the cyber-physical system domain, and is intended to unify the range of heterogeneous data-driven objects.
|
248 |
Big Data in Predictive Toxicology. Neagu, Daniel, Richarz, A-N. 15 January 2020 (has links)
No / Toxicological data are being generated at an ever-increasing rate and the volume of data generated is growing dramatically. This is due in part to advances in software solutions and cheminformatics approaches which increase the availability of open data from chemical, biological, toxicological and high-throughput screening resources. However, the amplified pace and capacity of data generation achieved by these novel techniques present challenges for organising and analysing data output.
Big Data in Predictive Toxicology discusses these challenges as well as the opportunities of new techniques encountered in data science. It addresses the nature of toxicological big data, their storage, analysis and interpretation. It also details how these data can be applied in toxicity prediction, modelling and risk assessment.
|
249 |
Transformative role of big data through enabling capability recognition in construction. Atuahene, Bernard Tuffour, Kanjanabootra, S., Gajendran, T. 10 August 2023 (has links)
Yes / Big data application is a significant transformative driver of change in the retail, health, engineering, and advanced manufacturing sectors. Big data studies in construction are still somewhat limited, although there is increasing interest in what big data application could achieve. Through interviews with construction professionals, this paper identifies the capabilities needed in construction firms to enable the accrual of the potentially transformative benefits of big data application in construction. Based on previous studies, the big data application capabilities needed to transform construction processes focus on data, people, technology, and organisation. However, the findings of this research suggest a critical modification to that focus to include knowledge and the organisational environment along with people, data, and technology. The research findings show that construction firms use big data with a combination strategy to enable transformation by (a) driving an in-house data management policy to roll out big data capabilities; (b) fostering collaborative capabilities with external firms for resource development; and (c) outsourcing big data services to address the capability deficits impacting digital transformation.
|
250 |
Traitement et raisonnement distribués des flux RDF / Distributed RDF stream processing and reasoning. Ren, Xiangnan. 19 November 2018 (has links)
Real-time processing of data streams emanating from sensors is becoming a common task in industrial scenarios. In an Internet of Things (IoT) context, data are emitted from heterogeneous stream sources, i.e., coming from different domains and data models. This requires that IoT applications efficiently handle data integration mechanisms. The processing of RDF data streams has hence become an important research field. This trend enables a wide range of innovative applications where the real-time and reasoning aspects are pervasive. The key implementation goal of such applications consists in efficiently handling massive incoming data streams and supporting advanced data analytics services such as anomaly detection. However, a modern RDF Stream Processing (RSP) engine has to address the volume and velocity characteristics encountered in the Big Data era. In an ongoing industrial project, we found that a 24/7 available stream processing engine usually faces massive data volumes, dynamically changing data structures and varying workload characteristics. These facts impact the engine's performance and reliability. To address these issues, we propose Strider, a hybrid adaptive distributed RDF Stream Processing engine that optimizes the logical query plan according to the state of the data streams. Strider has been designed to guarantee important industrial properties such as scalability, high availability, fault tolerance, high throughput and acceptable latency. These guarantees are obtained by designing the engine's architecture with state-of-the-art Apache components such as Spark and Kafka. Moreover, an increasing number of processing jobs executed over RSP engines require reasoning mechanisms. This usually comes at the cost of finding a trade-off between data throughput, latency and the computational cost of expressive inferences. Therefore, we extend Strider to support real-time RDFS+ (i.e., RDFS + owl:sameAs) reasoning capability. We combine Strider with a query rewriting approach for SPARQL that benefits from an intelligent encoding of the knowledge base. The system is evaluated along different dimensions and over multiple datasets to emphasize its performance. Finally, we have stepped further to exploratory RDF stream reasoning with a fragment of Answer Set Programming (ASP). This part of our research work is mainly motivated by the fact that more and more streaming applications require more expressive and complex reasoning tasks. The main challenge is to cope with the large volume and high velocity dimensions in a scalable and inference-enabled manner. Recent efforts in this area are still missing the aspect of system scalability for stream reasoning. Thus, we aim to explore the ability of modern distributed computing frameworks to process highly expressive knowledge inference queries over Big Data streams.
To do so, we consider queries expressed in a positive fragment of LARS (a temporal logic framework based on Answer Set Programming) and propose solutions to process such queries, based on the two main execution models adopted by major parallel and distributed execution frameworks: Bulk Synchronous Parallel (BSP) and Record-at-A-Time (RAT). We implement our solution, named BigSR, and conduct a series of evaluations. Our experiments show that BigSR achieves high throughput beyond a million triples per second using a rather small cluster of machines.
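To ground the architectural description, the following is a minimal sketch (not Strider or BigSR themselves) of consuming an RDF triple stream from Kafka with Spark Structured Streaming and running a toy continuous aggregation. The broker address, topic name and the one-N-Triples-statement-per-message layout are assumptions for illustration.

```python
# Minimal Spark Structured Streaming + Kafka sketch over an RDF triple stream.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, split, window

spark = SparkSession.builder.appName("rdf-stream-sketch").getOrCreate()

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
       .option("subscribe", "rdf-stream")                    # assumed topic
       .load())

# Kafka values arrive as bytes; decode each "<s> <p> <o> ." line into columns.
triples = (raw.selectExpr("CAST(value AS STRING) AS line", "timestamp")
           .withColumn("parts", split(col("line"), " "))
           .select(col("parts")[0].alias("subject"),
                   col("parts")[1].alias("predicate"),
                   col("parts")[2].alias("object"),
                   col("timestamp")))

# Toy continuous query: count triples per predicate over 10-second windows.
counts = (triples
          .groupBy(window(col("timestamp"), "10 seconds"), col("predicate"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```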
|