  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Disputing an Analytic Construct of Philosophical Conservatism

Evans, Daniel Carson 01 January 2012 (has links)
This paper examines and ultimately objects to a version of political Conservatism as described in Geoffrey Brennan and Alan Hamlin's paper "Analytic Conservatism." Brennan and Hamlin's argument rests on several claims about economic forecasting and societal risk-aversion that, taken together, favor the status quo within society. The paper refutes these claims while also considering counter-arguments Brennan and Hamlin could raise in defense of their theory. It concludes by endorsing the analytic dimension of Brennan and Hamlin's theory while criticizing the trivial and arbitrary nature of valuing the status quo.
182

Exploring Business Intelligence Commitment and Maturity in Small and Medium Sized Enterprises

Gudfinnsson, Kristens January 2011 (has links)
Implementing Business Intelligence solutions has fundamentally changed how many large organizations conduct their business. This is well understood in the scholarly literature, but the adoption of BI within small and medium sized enterprises has, on the other hand, received little attention. Given the importance of small and medium sized enterprises (SMEs) in the economy, the scarcity of research in this area can be viewed as a problem. The aim of this work is therefore to explore BI commitment in smaller organizations and to investigate how far they have proceeded in putting business analytics into action. In order to shed light on BI implementation in the context of smaller organizations, in-depth interviews were conducted with representatives of four organizations within the Skaraborg district of Sweden. The initial objective of the research project was to explore several focal areas in order to establish the current state of practice. This provided the groundwork for further investigation into how SMEs approach BI. Further work involved the use of two theoretical frameworks to analyze organizational commitment and analytical maturity within the focal areas. The main findings are that organizational commitment to implementing BI infrastructure is high among the participating companies, but the use of analytics is nevertheless limited to a few specific areas. The high ambition of managers to implement BI infrastructure can be key to further developing the use of business analytics. This work adds valuable insights for stakeholders within the community and for others who want an overview of the current status of BI within SMEs in Sweden.
183

Taking Advantage of Business Intelligence in Complex-Systems Environment

Gudfinnsson, Kristens January 2012 (has links)
Business intelligence has fundamentally changed how many companies conduct their business. The focus of the academic literature has, however, been on volume-operation companies that provide services to millions of customers. Complex-systems companies have fewer customers and pursue customer needs by providing more customized products and services. Knowledge in the research community is largely limited to volume-operation companies, and there is a need for more case studies on how companies, and complex-systems companies in particular, actually use their information systems. This work examines the case of a complex-systems company with the overall aim of seeing how complex-systems companies may take advantage of business intelligence. A framework was used to measure BI maturity; expressed future needs were compared to future trends in the BI literature, and BI usage in specific areas was analyzed with the help of a framework from the BI literature. The results indicate that the company lies somewhere between Aspirational and Experienced with respect to BI maturity. BI is used for reporting a variety of key performance indicators, and the main analysis tool for various calculations is Excel. The expressed future needs are mainly strategy-driven and technology-driven and often involve better and faster access to information. The difference between the business models of volume-operation companies and complex-systems companies could influence BI maturity and help to explain the maturity difference between these two types of companies. Furthermore, the results indicate that the role of BI tools differs between complex-systems companies and volume-operation companies.
184

Data analytics for networked and possibly private sources

Wang, Ting 05 April 2011 (has links)
This thesis focuses on two grand challenges facing the designers and operators of today's data analytics systems. First, how can information from multiple autonomous yet correlated sources be fused to provide consistent views of the underlying phenomena? Second, how can externally imposed constraints (privacy concerns in particular) be respected without compromising the efficacy of analysis? To address the first challenge, we apply a general correlation network model to capture the relationships among data sources, and propose Network-Aware Analysis (NAA), a library of novel inference models, to capture (i) how the correlation of the underlying sources is reflected in the spatial and/or temporal relevance of the collected data, and (ii) how to track causality in the data induced by dependencies among the data sources. We have also developed a set of space- and time-efficient algorithms that address (i) how to correlate relevant data and (ii) how to forecast future data. To address the second challenge, we further extend the concept of the correlation network to encode the semantic (possibly virtual) dependencies and constraints among the entities in question (e.g., medical records). We show through a set of concrete cases that correlation networks convey significant utility for their intended applications, while at the same time often serving as a stepping stone for adversaries performing inference attacks. Using correlation networks as the pivot for analyzing privacy-utility trade-offs, we propose Privacy-Aware Analysis (PAA), a general design paradigm for constructing analytical solutions with theoretical backing for both privacy and utility.
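As an illustration of the correlation-network idea underlying both NAA and PAA, the minimal sketch below builds a toy correlation network in Python: pairwise correlations between a few synthetic sources are computed and thresholded into edges. The source names, the threshold value, and the data are invented for the example and are not taken from the thesis.

```python
# Minimal sketch (assumed setup, not the thesis's NAA/PAA models):
# build a correlation network by thresholding pairwise correlations
# between aligned series from several hypothetical data sources.
import numpy as np
import pandas as pd
import networkx as nx

rng = np.random.default_rng(0)
base = rng.normal(size=200)
sources = pd.DataFrame({
    "sensor_a": base + rng.normal(scale=0.3, size=200),
    "sensor_b": base + rng.normal(scale=0.5, size=200),
    "sensor_c": rng.normal(size=200),            # an independent source
})

corr = sources.corr()                            # pairwise Pearson correlations
threshold = 0.5                                  # assumed relevance cutoff

graph = nx.Graph()
graph.add_nodes_from(sources.columns)
for i, u in enumerate(sources.columns):
    for v in sources.columns[i + 1:]:
        if abs(corr.loc[u, v]) >= threshold:
            graph.add_edge(u, v, weight=float(corr.loc[u, v]))

print(graph.edges(data=True))                    # correlated sources become edges
```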
185

Health Analytics and Predictive Modeling: Four Essays on Health Informatics

Lin, Yu-Kai January 2015 (has links)
There is a marked trend of using information technologies to improve healthcare. Among health IT, electronic health record (EHR) systems hold great promise as they modernize the paradigm and practice of care provision. However, empirical studies in the literature have found mixed evidence on whether EHRs improve quality of care. I posit two explanations for the mixed evidence. First, most prior studies failed to account for system use and focused only on EHR purchase or adoption. Second, most existing EHR systems provide inadequate clinical decision support and hence fail to reveal the full potential of digital health. In this dissertation I address two broad research questions: a) does meaningful use of EHRs improve quality of care? and b) how do we advance clinical decision making through innovative computational techniques of healthcare analytics? To these ends, the dissertation comprises four essays. The first essay examines whether meaningful use of EHRs improves quality of care through a natural experiment. I found that meaningful use significantly improves quality of care, and that this effect is greater in historically disadvantaged hospitals such as small, non-teaching, or rural hospitals. These empirical findings carry salient practical and policy implications for the role of health IT. In the other three essays I work with real-world EHR data sets and propose healthcare analytics frameworks and methods to better utilize clinical text (Essay II), to integrate clinical guidelines and EHR data for risk prediction (Essay III), and to develop a principled approach for multifaceted risk profiling (Essay IV). The models, frameworks, and design principles proposed in these essays advance not only health IT research but also contribute more broadly to business analytics, design science, and predictive modeling research.
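To make the risk-prediction setting of Essays III and IV more concrete, here is a hedged sketch of a baseline risk model: a logistic-regression classifier trained on a few EHR-style features. The feature names, the synthetic outcome model, and the evaluation metric are illustrative assumptions, not the dissertation's actual methods or data.

```python
# Baseline risk-prediction sketch on synthetic EHR-style features (assumed example).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.normal(65, 12, n),      # age (years)
    rng.normal(130, 20, n),     # systolic blood pressure
    rng.integers(0, 2, n),      # diabetes flag
])
# Synthetic outcome: risk rises with age, blood pressure, and diabetes.
logits = 0.04 * (X[:, 0] - 65) + 0.02 * (X[:, 1] - 130) + 0.8 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```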
186

New Trends in Business Intelligence: A case study on the impact of organizational demands of information and new technologies on BI

Naghdipour, Navid January 2014 (has links)
When the data warehouse concept was first introduced by IBM as part of their new information system in 1988, the first step in the field of modern decision support systems, or business intelligence, was taken. Since then, academics, practitioners and solution developers have put considerable effort into introducing new trends in these systems. Each new trend has its roots in what enterprises demand from these systems. Advances in Web technologies and social media have led to the introduction of new trends such as Cloud BI and Big Data, which are cost-effective and also have the potential to take advantage of semi-structured and unstructured data within organizations. This paper deals with these new trends and the influence of organizational demands and new technologies and tools on them. An in-depth literature review covers four major BI trends in detail: data warehouses, Business Performance Management (BPM), Cloud BI and Big Data. Two case studies with local business intelligence developers are carried out in order to explore the influences mentioned above. As a result of this study, a model is proposed that addresses the elements that affect BI trends, from both organizational and technological perspectives. It is observed that although many new trends have been introduced in recent years (e.g. Cloud BI and Big Data), this does not necessarily mean that older trends are becoming obsolete. Data warehouses and BPM systems are still widely used in industry. However, the later trends can be offered to clients that demand them. The results imply that Cloud BI is mainly suitable for companies with low initial budgets, and that Big Data can be adopted by organizations that want to exploit their social data sources. The fact that these later trends are built upon their predecessors has made data warehouses and the BPM approach the groundwork for any new trends to come.
187

Scalable Embeddings for Kernel Clustering on MapReduce

Elgohary, Ahmed 14 February 2014 (has links)
There is an increasing demand from businesses and industries to make the best use of their data. Clustering is a powerful tool for discovering natural groupings in data. The k-means algorithm is the most commonly used data clustering method, having gained popularity for its effectiveness on various data sets and its ease of implementation on different computing architectures. It assumes, however, that data are available in an attribute-value format, and that each data instance can be represented as a vector in a feature space where the algorithm can be applied. These assumptions are impractical for real data, and they hinder the use of complex data structures in real-world clustering applications. Kernel k-means is an effective data clustering method that extends the k-means algorithm to work on a similarity matrix over complex data structures. The kernel k-means algorithm is, however, computationally expensive, as it requires the complete kernel matrix to be computed and stored. Further, the kernelized nature of the algorithm hinders the parallelization of its computations on modern infrastructures for distributed computing. This thesis defines a family of kernel-based low-dimensional embeddings that allows kernel k-means to be scaled on MapReduce via an efficient and unified parallelization strategy. Then, three practical low-dimensional embedding methods that adhere to our definition of the embedding family are proposed. Combining the proposed parallelization strategy with any of the three embedding methods constitutes a complete, scalable and efficient MapReduce algorithm for kernel k-means. The efficiency and scalability of the presented algorithms are demonstrated analytically and empirically.
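The sketch below illustrates the general idea of clustering via a kernel-based low-dimensional embedding, using the standard Nystroem approximation from scikit-learn as a stand-in for the thesis's own embedding family: points are mapped into a low-dimensional approximation of an RBF kernel feature space and then clustered with ordinary k-means, so the full n-by-n kernel matrix is never formed. The kernel choice, parameter values, and data set are assumptions made for the example, not the methods proposed in the thesis.

```python
# Approximate kernel k-means via a low-dimensional embedding (illustrative only).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem
from sklearn.cluster import KMeans

X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)

# Low-dimensional embedding approximating the RBF kernel feature map.
embedding = Nystroem(kernel="rbf", gamma=15.0, n_components=100, random_state=0)
X_emb = embedding.fit_transform(X)

# Ordinary k-means in the embedded space stands in for kernel k-means,
# without ever computing or storing the full 500-by-500 kernel matrix.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_emb)
print(np.bincount(labels))   # cluster sizes
```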
188

Multivariate Networks: Visualization and Interaction Techniques

Jusufi, Ilir January 2013 (has links)
As more and more data is created each day, researchers from different scientific domains are trying to make sense of it. A lot of this data, for example our connections to friends on different social networking websites, can be modeled as graphs, where the nodes are actors and the edges are relationships between them. Researchers analyze this data to find new forms of communication, to explore different social groups or subgroups, to detect illegal activities, or to look for communication patterns that could help companies in their marketing campaigns. Another example is the huge networks found in systems biology, whose visualization is crucial for the understanding of living beings. The topological structure of a network on its own can give insight into the existence or distribution of interesting actors in the network. However, this is often not enough to understand complex network systems in real-world applications. The reason is that network elements (nodes or edges) are not simple one-dimensional data. In biology, for instance, experiments can be performed on biological networks; these experiments and network analysis approaches produce additional data that often need to be analyzed with respect to the underlying network structure. It is therefore crucial to visualize the additional attributes of the network while preserving the network structure as much as possible. The problem is not trivial, as these so-called multivariate networks can have a large number of attributes related to their nodes, edges, and different groups or clusters of nodes and/or edges. The aim of this thesis is to contribute to the development of visualization and interaction techniques for the visual analysis of multivariate networks. Two research goals are defined: first, a deeper understanding of existing approaches for visualizing multivariate networks should be acquired in order to classify them into categories and to identify disadvantages or unsolved visualization challenges; second, visualization and interaction techniques should be developed that overcome various issues of these approaches. The thesis first presents a brief survey of techniques for visualizing multivariate networks. It then discusses a small task-based user study investigating the usefulness of two main approaches to multivariate network visualization, followed by various visualization and interaction techniques for multivariate networks. Three different software tools were implemented to demonstrate our research efforts. All features of our systems are highlighted, including a description of the visualization and interaction techniques as well as their disadvantages and scalability issues where present.
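As a small illustration of attribute-aware network drawing in the spirit of this thesis, the sketch below maps a per-node attribute onto node color and size while a force-directed layout keeps the topology readable. The network (Zachary's karate club) and the degree-based attribute are stand-ins chosen for the example, not data or tools from the thesis.

```python
# Minimal multivariate-network drawing sketch: encode one node attribute
# as color and size on top of a standard force-directed layout.
import networkx as nx
import matplotlib.pyplot as plt

graph = nx.karate_club_graph()                       # stand-in network
expression = {n: graph.degree(n) for n in graph}     # hypothetical per-node attribute
nx.set_node_attributes(graph, expression, "expression")

pos = nx.spring_layout(graph, seed=3)                # keep the topology readable
sizes = [80 + 40 * graph.nodes[n]["expression"] for n in graph]
colors = [graph.nodes[n]["expression"] for n in graph]

nx.draw_networkx_edges(graph, pos, alpha=0.3)
nodes = nx.draw_networkx_nodes(graph, pos, node_size=sizes,
                               node_color=colors, cmap="viridis")
plt.colorbar(nodes, label="expression (illustrative attribute)")
plt.axis("off")
plt.show()
```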
189

Low-cost Data Analytics for Shared Storage and Network Infrastructures

Mihailescu, Madalin 09 August 2013 (has links)
Data analytics used to depend on specialized, high-end software and hardware platforms. Recent years, however, have brought forth the data-flow programming model, i.e., MapReduce, and with it a flurry of sturdy, scalable open-source software solutions for analyzing data. In essence, the commoditization of software frameworks for data analytics is well underway. Yet, up to this point, data analytics frameworks are still regarded as standalone, dedicated components; deploying these frameworks requires companies to purchase hardware to meet storage and network resource demands, and system administrators to handle the management of data across multiple storage systems. This dissertation explores the low-cost integration of frameworks for data analytics within existing, shared infrastructures. The thesis centers on smart software being the key enabler for the holistic commoditization of data analytics. We focus on two instances of smart software that aid in realizing the low-cost integration objective. For efficient storage integration, we build MixApart, a scalable data analytics framework that removes the dependency on dedicated storage for analytics; with MixApart, a single, consolidated storage back-end manages data and services all types of workloads, thereby lowering hardware costs and simplifying data management. We evaluate MixApart at scale with micro-benchmarks and production workload traces, and show that MixApart provides performance faster than or comparable to an analytics framework with dedicated storage. For effective sharing of the networking infrastructure, we implement OX, a virtual machine management framework that allows latency-sensitive web applications to share the data center network with data analytics through intelligent VM placement; OX further protects all applications from hardware failures. The two solutions allow the reuse of existing storage and networking infrastructures when deploying analytics frameworks, and substantiate our thesis that smart software upgrades can enable the end-to-end commoditization of analytics.