51

An examination of privacy in the socio-technological context of Big Data and the socio-cultural context of China

Fu, Tao 01 August 2015 (has links)
Privacy has been an academic concern, an ethical issue and a legislative conundrum. No other factor has shaped the understanding of privacy as much as the development of technologies, be it the invention of printing presses, telephones or cameras. With the diffusion of the mobile Internet, social media and the Internet of Things, and the penetration of devices such as smartphones, the global positioning system, surveillance cameras, sensors and radio frequency identification tags, Big Data, designed to economically extract value from a huge amount and variety of data, has been accumulating exponentially since 2012. Data-driven businesses collect, combine, use, share and analyze consumers’ personal information for business revenue. Consumers’ shopping habits, viewing habits, browsing history and many other online behaviors have been commodified. Never before in history has privacy been threatened by communication technologies as it is today. This dissertation studies some of the rising issues of technology and business that relate to privacy in China, a rising economic power of the East. China is a country with a Confucian heritage, governed under decades of Communist leadership. Its philosophical traditions and social fabric have shaped the perception of privacy for more than 2,000 years. In ancient China, “private” was not taken as negative, but commitment to the public or the greater good was an expected virtue. The country also has a long tradition of peer surveillance, whether under the baojia system or the later Urban and Rural Residents’ Committees. But after China adopted the reform and opening-up policy in 1978, consumerism inspired the new Chinese middle class to pursue more private space as a lifestyle. Alibaba, Baidu and Tencent are globally top-ranking Chinese Internet companies with huge numbers of users, transactions and revenues, whose businesses depend heavily on consumers’ personal data. In response to the growth of consumer data and the potential intrusion of privacy by Internet and information service providers (IISPs), the Ministry of Industry and Information Technology, a regulator of China’s Internet industry, enacted laws to regulate the collection and use of personal information by IISPs. Drawing upon the literature, the privacy theories of Westin, Altman and Nissenbaum, and the cultural theory of Hofstede, this study investigated the compliance of Chinese businesses’ privacy policies with relevant Chinese laws and the information provided in those policies regarding the collection, use and disclosure of Internet users’ personal information; Chinese consumers’ privacy attitudes and actions, including awareness, concerns, control, trust and trade-offs related to privacy; the differences among Chinese Fundamentalists, Pragmatists and the Unconcerned using the Core Privacy Orientation Index; and the conceptualization of privacy in present-day China. A triangulation of quantitative and qualitative methods, including case study, content analysis, an online survey and semantic network analysis, was employed to answer the research questions and test hypotheses. This study found that Chinese IISPs, represented by Alibaba, Baidu and Tencent, comply well with Chinese laws. Tencent provides the most information about the collection, use and disclosure of consumers’ personal information. Chinese consumers know little about the Big Data technologies that collect their personal information.
They are most concerned about other individuals and least concerned about the government when their personal information is accessed without their knowledge. When their personal information is collected by online businesses, Chinese consumers have more concerns about their online chats, images and emails, and fewer concerns about searches performed, websites browsed, and shopping and viewing habits. Fewer than one-third of the Chinese surveyed take proactive measures to manage online privacy settings. Chinese consumers make more effort to avoid being tracked by people who might criticize, harass or target them, by advertisers, and by hackers or criminals. They rarely make themselves invisible to the government, law enforcement officers or people they are familiar with, such as people from their past, family members and romantic partners. Chinese consumers trust the laws and regulations issued by the government to protect personal data more than they trust online businesses. Chinese consumers trade privacy for benefits only occasionally, but when they see more benefits from privacy trade-offs, they have fewer concerns. To Chinese consumers, privacy means personal information, including but not limited to family, home address, phone number, Chinese ID number, and passwords to bank accounts and other online accounts; leaking or disclosing such information without the owner’s consent to people the owner does not want to have it results in a sense of insecurity.
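As context for the Core Privacy Orientation Index and the Fundamentalist/Pragmatist/Unconcerned split mentioned above, the Python sketch below applies the commonly cited Westin-style scoring rule to hypothetical survey responses. The statement wording and the rule are assumptions for illustration and are not taken from the dissertation’s own instrument.

```python
# Sketch of Westin-style privacy segmentation. The three statements and the
# scoring rule below follow the commonly cited Westin index; they are assumed
# here for illustration, not quoted from the study.
from dataclasses import dataclass


@dataclass
class Response:
    # Agreement (True) or disagreement (False) with three statements:
    # 1. Consumers have lost all control over how personal information is used.
    # 2. Most businesses handle personal information properly and confidentially.
    # 3. Existing laws and practices provide a reasonable level of protection.
    lost_control: bool
    business_handles_properly: bool
    laws_are_adequate: bool


def segment(r: Response) -> str:
    # Fundamentalists agree with statement 1 and disagree with 2 and 3;
    # the Unconcerned show the opposite pattern; everyone else is a Pragmatist.
    if r.lost_control and not r.business_handles_properly and not r.laws_are_adequate:
        return "Fundamentalist"
    if not r.lost_control and r.business_handles_properly and r.laws_are_adequate:
        return "Unconcerned"
    return "Pragmatist"


responses = [
    Response(True, False, False),   # classic Fundamentalist pattern
    Response(False, True, True),    # classic Unconcerned pattern
    Response(True, True, False),    # mixed answers -> Pragmatist
]
for r in responses:
    print(segment(r))
```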
52

Utmaningar i upphandlingsprocessen av Big datasystem : En kvalitativ studie om vilka utmaningar organisationer upplever vid upphandlingar av Big data-system / Challenges in the procurement process of Big Data systems : A qualitative study of the challenges organisations experience when procuring Big Data systems

Eriksson, Marcus, Pujol Gibson, Ricard January 2017 (has links)
Organizations today have access to massive amounts of data that cannot be handled by traditional Business Intelligence tools. Big Data is characterized by large volume, high velocity and wide variety. To handle these characteristics of the data and be able to generate business value, organizations need to procure a Big Data system. Today’s organizations are aware that investing in Big Data can be profitable, but the way there is unclear. The purpose of this study is to investigate the challenges organizations may experience in the process of procuring a Big Data system and where in the procurement process these challenges arise. The empirical material was collected by interviewing three large Swedish companies and public authorities that have procured a Big Data system. The Critical Incident Technique was used to identify the challenges the organizations experienced in the process of procuring a Big Data system. The findings of the study show that organizations experience challenges in understanding the need for a Big Data system, creating the project team, choosing the project method, defining the requirements of the system, and managing sensitive and personal data.
53

Identification of Patterns of Fatal Injuries in Humans through Big Data

Silva, Jesus, Romero, Ligia, Pineda, Omar Bonerge, Herazo-Beltran, Yaneth, Zilberman, Jack January 2020 (has links)
External cause injuries are defined as intentional or unintentional harm or injury to a person, which may be caused by trauma, poisoning, assault, accidents, etc., and may be fatal (fatal injury) or not lead to death (non-fatal injury). External injuries have been considered a global health problem for two decades. This work aims to determine criminal patterns by applying data mining techniques to a sample of patients from the city of Mumbai, India.
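The abstract does not specify which data mining techniques are used. As one hedged illustration of pattern discovery on injury records, the Python sketch below clusters synthetic records with invented fields and inspects each cluster’s dominant attributes; the feature set and categories are assumptions, not the paper’s data.

```python
# Hedged sketch: clustering synthetic injury records to surface recurring
# patterns. The fields, categories and fatality rates are illustrative only.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n = 5_000
records = pd.DataFrame({
    "cause": rng.choice(["traffic", "assault", "poisoning", "fall"], n),
    "intent": rng.choice(["intentional", "unintentional"], n),
    "age_group": rng.choice(["0-14", "15-29", "30-59", "60+"], n),
    "fatal": rng.choice([0, 1], n, p=[0.8, 0.2]),
})

# One-hot encode the categorical attributes so k-means can work on them.
X = pd.get_dummies(records[["cause", "intent", "age_group"]])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
records["cluster"] = kmeans.labels_

# Summarize each cluster: most frequent cause/intent and observed fatality rate.
for c, group in records.groupby("cluster"):
    print(c, group["cause"].mode()[0], group["intent"].mode()[0],
          round(group["fatal"].mean(), 2))
```

In practice, association rule mining or supervised classifiers could equally serve as the pattern-discovery step; the clustering above is only one plausible choice.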
54

Regression analysis of big count data via a-optimal subsampling

Zhao, Xiaofeng 19 July 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / There are two computational bottlenecks in Big Data analysis: (1) the data is too large for a desktop to store, and (2) the computing task takes too long to finish. While the Divide-and-Conquer approach easily breaks the first bottleneck, the Subsampling approach breaks both simultaneously. Uniform sampling and nonuniform sampling (Leverage Scores sampling) are frequently used in the recent development of fast randomized algorithms. However, as Peng and Tan (2018) have demonstrated, neither approach is effective in extracting important information from data. In this thesis, we conduct regression analysis for big count data via A-optimal subsampling. We derive A-optimal sampling distributions by minimizing the trace of certain dispersion matrices in general estimating equations (GEE). We point out that the A-optimal distributions have the same running time as the full-data M-estimator. To compute the distributions faster, we propose the A-optimal Scoring Algorithm, which is implementable by parallel computing, is sequentially updatable for streaming data, and has a faster running time than that of the full-data M-estimator. We present asymptotic normality for the estimates in GEEs and in generalized count regression. A data truncation method is introduced. We conduct extensive simulations to evaluate the numerical performance of the proposed sampling distributions. We apply the proposed A-optimal subsampling method to analyze two real count data sets, the Bike Sharing data and the Blog Feedback data. Our results in both simulations and real data sets indicate that the A-optimal distributions substantially outperform the uniform distribution and have faster running times than the full-data M-estimators.
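To make the contrast between uniform and non-uniform subsampling concrete, here is a minimal Python sketch for Poisson count regression. It is not the A-optimal Scoring Algorithm from the thesis: the influence-style scores and the pilot fit are simplified assumptions, shown only to illustrate how a probability-weighted subsample may track the full-data fit better than a uniform subsample of the same size.

```python
# Sketch contrasting uniform subsampling with a simple non-uniform
# (influence-weighted) subsampling scheme for Poisson count regression.
# This is an illustration, not the thesis's A-optimal Scoring Algorithm.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n, p = 100_000, 5
X = rng.normal(size=(n, p))
beta_true = np.array([0.5, -0.3, 0.2, 0.0, 0.1])
y = rng.poisson(np.exp(X @ beta_true))


def fit(X_sub, y_sub, w=None):
    model = PoissonRegressor(alpha=0.0, max_iter=300)
    model.fit(X_sub, y_sub, sample_weight=w)
    return model.coef_


beta_full = fit(X, y)          # full-data fit as the reference
r = 2_000                      # subsample size

# Uniform subsampling.
idx_u = rng.choice(n, size=r, replace=False)
beta_uniform = fit(X[idx_u], y[idx_u])

# Non-uniform subsampling: probabilities proportional to a crude influence
# proxy (pilot-model residual times row norm); this is only a stand-in for
# the A-optimal probabilities derived in the thesis.
pilot_idx = rng.choice(n, size=1_000, replace=False)
beta_pilot = fit(X[pilot_idx], y[pilot_idx])
resid = np.abs(y - np.exp(X @ beta_pilot))
scores = resid * np.linalg.norm(X, axis=1)
probs = scores / scores.sum()
idx_w = rng.choice(n, size=r, replace=False, p=probs)
# Re-weight by approximate inverse inclusion probability to keep the
# estimator roughly unbiased.
beta_weighted = fit(X[idx_w], y[idx_w], w=1.0 / (r * probs[idx_w]))

for name, b in [("full", beta_full), ("uniform", beta_uniform), ("weighted", beta_weighted)]:
    print(f"{name:9s}", np.round(b, 3), " error:", round(float(np.linalg.norm(b - beta_true)), 4))
```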
55

MACHINE LEARNING ALGORITHM PERFORMANCE OPTIMIZATION: SOLVING ISSUES OF BIG DATA ANALYSIS

Sohangir, Soroosh 01 December 2015 (has links) (PDF)
Because of the high time and space complexity involved, generating machine learning models for big data is difficult. This research introduces a novel approach to optimizing the performance of learning algorithms, with a particular focus on big data manipulation. To implement this method, a machine learning platform using eighteen machine learning algorithms was built. The platform is tested on four different use cases, and the results are illustrated and analyzed.
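The abstract does not name the eighteen algorithms or the four use cases. The Python sketch below only illustrates the general shape of such a platform: a small harness, under assumed algorithm and dataset choices, that fits several scikit-learn classifiers on one synthetic dataset and reports training time and accuracy.

```python
# Hedged sketch of a benchmarking harness that times several learning
# algorithms on one dataset; the algorithm list and dataset are assumptions,
# not the eighteen algorithms or four use cases from the dissertation.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=20_000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_tr, y_tr)                      # measure fit time only
    elapsed = time.perf_counter() - start
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name:22s} fit: {elapsed:6.2f}s  accuracy: {acc:.3f}")
```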
56

BEYOND THE (re) DECORATED SHED: EXPLORING ALTERNATIVE METHODS FOR BIG BOX REUSE

RUTLEDGE, KEVAN FOSTER January 2006 (has links)
No description available.
57

Development and Utilization of Big Bridge Data for Predicting Deck Condition Rating Using Machine Learning Algorithms

Fard, Fariba 05 1900 (has links)
Accurately predicting the deck condition rating of a bridge is crucial for effective maintenance and repair planning. Despite significant research efforts to develop deterioration models, a nationwide model has not been developed. This study aims to identify an appropriate machine learning (ML) algorithm that can accurately predict the deck condition ratings of the nation's bridges. To achieve this, the study collected big bridge data (BBD), which includes National Bridge Inventory (NBI), traffic, climate, and hazard data gathered using geospatial information science (GIS) and remote sensing techniques. Two sets of data were collected: a BBD for the single year of 2020 and a historical BBD covering the five-year period from 2016 to 2020. Three ML algorithms, including random forest, eXtreme Gradient Boosting (XGBoost), and Artificial Neural Network (ANN), were trained using 319,404 and 1,246,261 bridge decks in the BBD and the historical BBD, respectively. Results showed that the use of the historical BBD significantly improved the performance of the models compared to the single-year BBD. Additionally, random forest and XGBoost, trained using the historical BBD, demonstrated higher overall accuracies and average F1 scores than the ANN model. Specifically, the random forest and XGBoost models achieved overall accuracies of 83.4% and 79.4%, respectively, and average F1 scores of 79.7% and 77.5%, respectively, while the ANN model achieved an overall accuracy of 58.8% and an average F1 score of 46.1%. The permutation-based variable importance revealed that the hazard data related to earthquakes did not contribute significantly to model development. In conclusion, tree-based ensemble learning algorithms, such as random forest and XGBoost, trained using updated historical bridge data, including NBI, traffic, and climate data, provide a useful tool for accurately predicting the deck condition ratings of bridges in the United States, allowing infrastructure managers to efficiently schedule inspections and allocate maintenance resources.
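As a rough, self-contained illustration of the tree-ensemble approach the study favours, the Python sketch below trains a random forest on synthetic bridge-deck features and reports overall accuracy, macro F1 and permutation-based variable importance. The feature names, data generation and target construction are assumptions for illustration and are not drawn from the NBI or the study's BBD.

```python
# Sketch of a tree-ensemble classifier for bridge deck condition ratings.
# Features and data below are synthetic and illustrative; the thesis trains
# on National Bridge Inventory plus traffic, climate and hazard data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 20_000
age = rng.integers(0, 80, n)            # bridge age in years (assumed feature)
adt = rng.lognormal(8, 1, n)            # average daily traffic (assumed feature)
precip = rng.normal(900, 250, n)        # annual precipitation in mm (assumed feature)
freeze_thaw = rng.integers(0, 120, n)   # freeze-thaw cycles per year (assumed feature)
X = np.column_stack([age, adt, precip, freeze_thaw])

# Synthetic condition rating (9 = excellent, lower = worse), driven mostly by age.
latent = 9 - 0.07 * age - 0.01 * freeze_thaw + rng.normal(0, 0.7, n)
y = np.clip(np.round(latent), 3, 9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy:", round(accuracy_score(y_te, pred), 3))
print("macro F1:", round(f1_score(y_te, pred, average="macro"), 3))

# Permutation-based variable importance, the diagnostic the study reports.
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
for name, score in zip(["age", "adt", "precip", "freeze_thaw"], imp.importances_mean):
    print(f"{name:12s} {score:.4f}")
```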
58

A strategic approach of value identification for a big data project

Lakoju, Mike January 2017 (has links)
The disruptive nature of innovations and technological advancements presents potentially huge benefits; however, caution is critical because they also come with challenges. This author holds to the school of thought that every organisation or society should properly evaluate innovations and their attendant challenges from a strategic perspective before adopting them, or risk being blindsided by the after-effects. Big Data is one such innovation, currently trending within industry and academia. Organisations are compelled to constantly find new ways to stay ahead of the competition, and it is for this reason that some incoherencies exist in the field of Big Data. On the one hand, some organisations rush into implementing Big Data projects; on the other, in possibly equal measure, many organisations remain sceptical and uncertain of the benefits of Big Data in general and are also concerned about the implementation costs. This has created a huge focus on the area of Big Data implementation. The literature reveals a good number of challenges around Big Data project implementations; for example, most Big Data projects are either abandoned or do not hit their expected targets. Unfortunately, most IS literature has concentrated on implementation methodologies that are primarily focused on the data, resources, Big Data infrastructures, algorithms and so on. Rather than leaving this incoherent space as it is, this research seeks to close it and open opportunities to harness and expand knowledge. Consequently, the research takes a slightly different standpoint by approaching Big Data implementation from a strategic perspective. The author emphasises that the focus should shift from going straight into implementing Big Data projects to first implementing a Big Data strategy for the organisation. Before implementation, this strategy step creates the value proposition and identifies deliverables that justify the project. To this end, the researcher combines alignment theory with digital business strategy theory to create a Big Data Strategy Framework that organisations can use to align their business strategy with the Big Data project. The framework was tested in two case studies, and the study resulted in the generation of strategic Big Data goals for both case organisations. The Big Data Strategy Framework aided the organisations in identifying the potential value that could be obtained from their Big Data projects, and these strategic Big Data goals can now be implemented in Big Data projects.
59

Možnosti využitia Big Data pre Competitive Inteligence / Possibilities of Big Data use for Competitive Intelligence

Verníček, Marek January 2016 (has links)
The main purpose of this thesis is to investigate the use of Big Data for the methods and procedures of Competitive Intelligence. Among the goals of the work is a toolkit for small and large businesses intended to support them throughout the whole process of working with Big Data. Another goal is to design an effective solution for processing Big Data to gain a competitive advantage in business. The theoretical part of the work reviews the scientific literature available in the Czech Republic and abroad and describes the current state of Competitive Intelligence, with Big Data as one of its possible sources. Subsequently, the work deals with the characteristics of Big Data, the differences from working with conventional data, the need for thorough preparation, and the applicability of Big Data to the methods of Competitive Intelligence. The practical part analyses the Big Data tools available on the market with regard to the whole process, from data collection to the preparation of analysis reports and the integration of the entire solution into an automated workflow. The outcome of this part is a Big Data software toolkit for small and large businesses based on their budgets. The final part of the work classifies the most promising business areas, those that can benefit most from the use of Big Data in order to gain competitive advantages, and proposes the most effective approach to working with Big Data. Among the other benefits of this work are an expanded range of resources for Competitive Intelligence and an in-depth analysis of the possibilities of Big Data usage, designed to help professionals make use of this hitherto untapped potential to improve market position, gain new customers and strengthen the existing user base.
60

The dynamic management revolution of Big Data : A case study of Åhlen’s Big Data Analytics operation

Rystadius, Gustaf, Monell, David, Mautner, Linus January 2020 (has links)
Background: The implementation of Big Data Analytics (BDA) has increased drastically within several sectors such as retailing. Due to this rapidly changing environment, companies have to adapt and modify their business strategies and models accordingly. The concepts of ambidexterity and agility are said to act as mediators of these changes in relation to a company's Big Data Analytics Capabilities (BDAC). Problem: Research within the respective fields of dynamic mediators and BDAC has been conducted, but investigation of the specific traits of these mediators, their interconnection and their impact on BDAC is scant. Scholars have found this gap surprising and have called for further empirical investigation.  Purpose: This paper sought to empirically investigate which specific traits of ambidexterity and agility emerged within the case company Åhlen's BDA operation, and how these traits are interconnected. It further studied how these traits and their interplay impact the firm's talent and managerial BDAC. Method: A qualitative case study was conducted at the retail firm Åhlens with three participants central to the firm's BDA operation. Semi-structured interviews were conducted with questions derived from a conceptual framework based upon the reviewed literature and pilot interviews. The data were then analyzed and matched to the literature using a thematic analysis approach.  Results: Five ambidextrous traits and three agile traits were found within Åhlen's BDA operation. Analysis of these traits showed a clear positive impact on Åhlen's BDAC when the traits were properly interconnected. Further, it was found that in the absence of such interplay, the dynamic mediators did not have as positive an impact and occasionally even had disruptive effects on the firm's BDAC. Hence it was concluded that a proper connection between the mediators must be present in order to successfully impact and enhance the capabilities.
