301

Information visualisation and data analysis using web mash-up systems

Khan, Wajid January 2014 (has links)
The arrival of e-commerce systems has contributed greatly to the economy and has played a vital role in collecting huge volumes of transactional data. Analysing business and consumer behaviour is becoming more difficult by the day given the production of such a colossal volume of data. Enterprise 2.0 has the ability to store and create an enormous amount of transactional data; the purpose for which the data were collected can easily be lost as essential information goes unnoticed in large and complex data sets. Information overflow is a major contributor to this dilemma. In the current environment, where hardware systems can store such large volumes of data and software systems are capable of substantial data production, data exploration problems are on the rise. The problem lies not in the production or storage of data but in the effectiveness of the systems and techniques by which essential information can be retrieved from complex data sets in a comprehensive and logical way as questions are asked of the data. With existing information retrieval systems and visualisation tools, the more specific the questions asked, the more definitive and unambiguous the visualised results that can be attained; but for complex and large data sets there are no elementary or simple questions. A profound information visualisation model and system is therefore required to analyse complex data sets through data analysis and information visualisation, making it possible for decision makers to identify the expected and discover the unexpected. To address complex data problems, a comprehensive and robust visualisation model and system is introduced. The visualisation model consists of four major layers: (i) acquisition and data analysis, (ii) data representation, (iii) user and computer interaction, and (iv) results repositories.
There are major contributions in all four layers, particularly in data acquisition and data representation. Multiple-attribute and multidimensional data visualisation techniques are identified in Enterprise 2.0 and Web 2.0 environments. Transactional tagging and linked data are unearthed, a novel contribution in information visualisation. The visualisation model and system is first realised as a tangible software system, which is then validated against different large data sets in three experiments. The first experiment is based on the large Royal Mail postcode data set. The second experiment is based on a large transactional data set in an enterprise environment, while in the third the same data set is processed in a non-enterprise environment. The system interaction, facilitated through new mashup techniques, enables users to interact more fluently with the data and the representation layer. The results are exported into various reusable formats and retrieved for further comparison and analysis. The information visualisation model introduced in this research is a compact process for data sets of any size and type, which is a major contribution in information visualisation and data analysis. Advanced data representation techniques are employed using various web mashup technologies. New visualisation techniques have emerged from the research, such as transactional tagging visualisation and linked data visualisation. The information visualisation model and system is extremely useful in addressing complex data problems with strategies that are easy to interact with and integrate.
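The four-layer model described above can be illustrated as a chain of processing stages. The sketch below is a hypothetical Python analogue; the layer names follow the abstract, but all function names, record fields, and internals are assumptions, not the thesis's actual implementation:

```python
# Hypothetical sketch of the four-layer visualisation pipeline; all
# internals are illustrative assumptions, not the thesis's software.
def acquire_and_analyse(raw_records):
    """Layer (i): clean transactional records and derive simple aggregates."""
    totals = {}
    for record in raw_records:
        totals[record["tag"]] = totals.get(record["tag"], 0) + record["amount"]
    return totals

def represent(aggregates):
    """Layer (ii): map aggregates to a displayable form (here, sorted rows)."""
    return sorted(aggregates.items(), key=lambda kv: kv[1], reverse=True)

def interact(rows, min_amount=0):
    """Layer (iii): a user-driven filter standing in for mashup interaction."""
    return [row for row in rows if row[1] >= min_amount]

def export(rows):
    """Layer (iv): persist results in a reusable format (CSV-like lines)."""
    return ["%s,%d" % row for row in rows]

records = [{"tag": "books", "amount": 5}, {"tag": "music", "amount": 9},
           {"tag": "books", "amount": 3}]
report = export(interact(represent(acquire_and_analyse(records)), min_amount=5))
```

Each layer consumes the previous layer's output, which mirrors how the model lets results be exported and then retrieved for further comparison.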
302

ASPCAP: THE APOGEE STELLAR PARAMETER AND CHEMICAL ABUNDANCES PIPELINE

García Pérez, Ana E., Prieto, Carlos Allende, Holtzman, Jon A., Shetrone, Matthew, Mészáros, Szabolcs, Bizyaev, Dmitry, Carrera, Ricardo, Cunha, Katia, García-Hernández, D. A., Johnson, Jennifer A., Majewski, Steven R., Nidever, David L., Schiavon, Ricardo P., Shane, Neville, Smith, Verne V., Sobeck, Jennifer, Troup, Nicholas, Zamora, Olga, Weinberg, David H., Bovy, Jo, Eisenstein, Daniel J., Feuillet, Diane, Frinchaboy, Peter M., Hayden, Michael R., Hearty, Fred R., Nguyen, Duy C., O’Connell, Robert W., Pinsonneault, Marc H., Wilson, John C., Zasowski, Gail 23 May 2016 (has links)
The Apache Point Observatory Galactic Evolution Experiment (APOGEE) has built the largest moderately high-resolution (R ≈ 22,500) spectroscopic map of stars across the Milky Way, including dust-obscured areas. The APOGEE Stellar Parameter and Chemical Abundances Pipeline (ASPCAP) is the software developed for the automated analysis of these spectra. ASPCAP determines atmospheric parameters and chemical abundances by comparing observed spectra to libraries of theoretical spectra, using χ² minimization in a multidimensional parameter space. The package consists of a FORTRAN90 code that performs the actual minimization and a wrapper IDL code for bookkeeping and data handling. This paper explains the ASPCAP components and functionality in detail, and presents results from a number of tests designed to check its performance. ASPCAP provides stellar effective temperatures, surface gravities, and metallicities precise to 2%, 0.1 dex, and 0.05 dex, respectively, for most APOGEE stars, which are predominantly giants. It also provides abundances for up to 15 chemical elements with various levels of precision, typically under 0.1 dex. The final data release (DR12) of the Sloan Digital Sky Survey III contains an APOGEE database of more than 150,000 stars. ASPCAP development continues in the SDSS-IV APOGEE-2 survey.
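The core idea of χ² minimization against a spectral library can be sketched in a few lines. This is a toy Python illustration of the general technique, not the ASPCAP code itself (which works on a much larger interpolated grid); the parameter grid, wavelengths, and error model here are all invented for the example:

```python
import numpy as np

def best_fit_template(observed, errors, library):
    """Pick the library spectrum minimizing chi^2 = sum((obs - model)^2 / err^2).

    library: dict mapping parameter tuples -> model flux arrays.
    Returns (best_params, best_chi2).
    """
    best_params, best_chi2 = None, np.inf
    for params, model in library.items():
        chi2 = np.sum(((observed - model) / errors) ** 2)
        if chi2 < best_chi2:
            best_params, best_chi2 = params, chi2
    return best_params, best_chi2

# Toy "library" keyed by (Teff, logg); a real pipeline interpolates
# between grid nodes rather than picking the nearest one.
wave = np.linspace(0, 1, 50)
library = {
    (4500, 2.0): 1.0 - 0.3 * np.sin(6 * wave),
    (5000, 3.0): 1.0 - 0.2 * np.sin(6 * wave),
}
obs = library[(5000, 3.0)] + 0.001   # observed spectrum with a small offset
err = np.full(50, 0.01)
params, chi2 = best_fit_template(obs, err, library)
```

Brute-force search like this only scales to tiny grids; the multidimensional minimization the abstract describes is what makes the real problem hard.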
303

Implementing a Class of Permutation Tests: The coin Package

Hothorn, Torsten, Hornik, Kurt, van de Wiel, Mark A., Zeileis, Achim January 2007 (has links) (PDF)
The R package coin implements a unified approach to permutation tests, providing a large class of independence tests for nominal, ordered, numeric, and censored data as well as multivariate data at mixed scales. Based on a rich and flexible conceptual framework that embeds different permutation test procedures into a common theory, a computational framework is established in coin that likewise embeds the corresponding R functionality in a common S4 class structure with associated generic functions. As a consequence, the computational tools in coin inherit the flexibility of the underlying theory, and conditional inference functions for important special cases can be set up easily. Conditional versions of classical tests - such as tests for location and scale problems in two or more samples, independence in two- or three-way contingency tables, or association problems for censored, ordered categorical or multivariate data - can easily be implemented as special cases using this computational toolbox by choosing appropriate transformations of the observations. The paper gives a detailed exposition of both the internal structure of the package and the user interfaces provided. / Series: Research Report Series / Department of Statistics and Mathematics
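The basic two-sample location test that coin generalizes can be sketched outside R as well. The following is a minimal Python sketch of a Monte Carlo permutation test for a difference in means (the data and function name are hypothetical; this illustrates the principle, not coin's API):

```python
import random

def permutation_test(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test for a difference in means.

    Repeatedly reshuffles the pooled sample and counts how often the
    permuted mean difference is at least as extreme as the observed one.
    Returns a Monte Carlo two-sided p-value.
    """
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n_x = len(x)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_x]) / n_x - sum(pooled[n_x:]) / len(y))
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction avoids p = 0

p = permutation_test([1.2, 1.4, 1.1, 1.5], [2.1, 2.3, 2.0, 2.4])
```

Swapping the mean difference for another test statistic (a rank sum, a scale statistic, a contingency-table measure) yields other members of the same family, which is exactly the "appropriate transformations of the observations" idea the abstract describes.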
304

Determinants of Foreign Direct Investment: A panel data analysis of the MINT countries

Göstas Escobar, Alexandra, Fanbasten, Niko January 2016 (has links)
One of the most visible signs of the globalization of the world economy is the increase of Foreign Direct Investment (FDI) inflows across countries. Over the past decade the trend of FDI has shifted from developed countries to emerging economies, most notably the BRICS countries. However, as the BRICS' reputation was damaged by their weak growth outlook in the early 2010s, investors are shifting to a new economic grouping, the MINT (Mexico, Indonesia, Nigeria and Turkey) countries, for better future prospects as an FDI destination. Since the MINT countries have emerged as a popular destination for FDI, it is necessary to investigate what key factors make these four countries attractive as FDI destinations. Hence, this paper analyzes the determinants of inward FDI into the MINT countries over the period from 1990 to 2014. To answer the research question and demonstrate the effect of the seven independent variables (market size, economic instability, natural resources availability, infrastructure facilities, trade openness, institutional stability and political stability) on FDI as the dependent variable, the study uses a panel data analysis. The data are secondary data collected from the World Bank dataset. The empirical findings illustrate that market size, economic instability, infrastructure facilities, trade openness, institutional stability, and political stability are significant determinants of FDI inflows to the MINT countries, while natural resources availability appears to be an insignificant determinant of FDI inflows to the MINT countries.
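A panel regression of this kind can be sketched in its simplest fixed-effects form: one dummy per country plus the regressors of interest (the least-squares dummy variable approach). The data below are simulated, not the World Bank series the study uses, and the single regressor stands in for the seven variables of the paper:

```python
import numpy as np

# Hypothetical panel: 4 countries x 25 years, one regressor plus
# country fixed effects. All values are simulated for illustration.
rng = np.random.default_rng(0)
n_countries, n_years = 4, 25
country = np.repeat(np.arange(n_countries), n_years)
x = rng.normal(size=n_countries * n_years)              # e.g. trade openness
effects = np.array([1.0, 2.0, 3.0, 4.0])[country]       # country intercepts
y = 0.5 * x + effects + 0.1 * rng.normal(size=x.size)   # FDI inflows (toy)

# Design matrix: one dummy column per country plus the regressor.
dummies = (country[:, None] == np.arange(n_countries)).astype(float)
X = np.column_stack([dummies, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
slope = beta[-1]  # estimated coefficient on x (true value here is 0.5)
```

The country dummies absorb time-invariant differences between Mexico, Indonesia, Nigeria, and Turkey, so the slope reflects the within-country relationship between the regressor and FDI.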
305

Computational Models of Nuclear Proliferation

Frankenstein, William 01 May 2016 (has links)
This thesis utilizes social influence theory and computational tools to examine the disparate impact of positive and negative ties in nuclear weapons proliferation. The thesis is broadly organized into two sections: a simulation section, which focuses on government stakeholders, and a large-scale data analysis section, which focuses on the public and domestic actor stakeholders. The simulation section demonstrates that the nonproliferation norm is an emergent behavior of political alliance and hostility networks, and that alliances play a role in present-day nuclear proliferation. This model is robust and captures second-order effects of extended hostility and alliance relations. In the large-scale data analysis section, the thesis demonstrates the role that context plays in sentiment evaluation and highlights how Twitter collection can provide useful input to policy processes. It first presents the results of an on-campus study showing that context plays a role in sentiment assessment. Then, in an analysis of a Twitter dataset of over 7.5 million messages, it assesses the role of noise and biases in online data collection. In a deep dive analyzing the Iranian nuclear agreement, it demonstrates that the Middle East is not facing a nuclear arms race, and shows that there is a structural hole in the online discussion surrounding nuclear proliferation. By combining both approaches, policy analysts gain a complete and generalizable set of computational tools to assess and analyze disparate stakeholder roles in nuclear proliferation.
306

Getting Things in Order: An Introduction to the R package seriation

Hahsler, Michael, Hornik, Kurt, Buchta, Christian January 2007 (has links) (PDF)
Seriation, i.e., finding a linear order for a set of objects given data and a loss or merit function, is a basic problem in data analysis. Owing to the problem's combinatorial nature, it is hard to solve for all but very small sets. Nevertheless, both exact solution methods and heuristics are available. In this paper we present the package seriation, which provides the infrastructure for seriation with R. The infrastructure comprises data structures to represent linear orders as permutation vectors, a wide array of seriation methods using a consistent interface, a method to calculate the value of various loss and merit functions, and several visualization techniques which build on seriation. To illustrate how easily the package can be applied to a variety of applications, a comprehensive collection of examples is presented. / Series: Research Report Series / Department of Statistics and Mathematics
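The combinatorial nature of the problem is easy to see in a brute-force formulation. The Python sketch below finds the exact optimum for one simple merit criterion (minimum total dissimilarity between adjacent objects) by enumerating all orderings; the dissimilarity matrix is made up for the example, and this is not the seriation package's algorithm:

```python
from itertools import permutations

def seriate(dissim, labels):
    """Brute-force seriation: find the ordering that minimizes the sum of
    dissimilarities between adjacent objects (one simple loss criterion).

    Enumerates all n! orderings, so this is only feasible for very small
    sets -- exactly the combinatorial difficulty the abstract notes.
    """
    best_order, best_cost = None, float("inf")
    for perm in permutations(range(len(labels))):
        cost = sum(dissim[perm[i]][perm[i + 1]] for i in range(len(perm) - 1))
        if cost < best_cost:
            best_order, best_cost = perm, cost
    return [labels[i] for i in best_order], best_cost

# Toy symmetric dissimilarity matrix for objects a..d
d = [[0, 3, 1, 9],
     [3, 0, 7, 2],
     [1, 7, 0, 8],
     [9, 2, 8, 0]]
order, cost = seriate(d, ["a", "b", "c", "d"])
```

Heuristics replace the exhaustive loop with cheaper search (e.g. hierarchical clustering with leaf reordering), trading optimality for scalability.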
307

Statistické srovnání výsledků perkutánních, ureteroskopických a robotických operací pro obstrukci ureteropelvické junkce. / Statistical comparison of the results of percutaneous, ureteroscopic and robotic surgeries for ureteropelvic junction obstruction

Masarovičová, Martina January 2008 (has links)
The aim of this diploma thesis is the statistical processing of a sample of patients who were hospitalized and treated for ureteropelvic junction obstruction at the urological department of ÚNV Prague over the last 20 years, in order to determine the optimal treatment method. Evaluating the surgical techniques from both the surgical and the economic point of view creates a comprehensive picture of the advantages and disadvantages connected with the application of each method and enables all participating parties to decide in cases of doubt. Statistical analysis is a proper instrument here, leading to answers, though it also opens opportunities for discussion.
308

NETWORK AND TOPOLOGICAL ANALYSIS OF SCHOLARLY METADATA: A PLATFORM TO MODEL AND PREDICT COLLABORATION

Lance C Novak (7043189) 15 August 2019 (has links)
The scale of the scholarly community complicates searches within scholarly databases, necessitating keywords to index the topics of any given work. As a result, an author's choice of keywords affects the visibility of each publication, making the sum of these choices a key representation of the author's academic profile. As such, the underlying network of investigators is often viewed through the lens of their keyword networks. Current keyword networks connect publications only if they use the exact same keyword, meaning uncontrolled keyword choice prevents connections despite semantic similarity. Computational understanding of semantic similarity has already been achieved through the process of word embedding, which transforms words to numerical vectors with context-correlated values. The resulting vectors preserve semantic relations and can be analyzed mathematically. Here we develop a model that uses embedded keywords to construct a network which circumvents the limitations caused by uncontrolled vocabulary. The model pipeline begins with a set of faculty, whose publications and keywords are retrieved via the SCOPUS API. These keywords are processed and then embedded. This work develops a novel method of network construction that leverages the interdisciplinarity of each publication, resulting in a unique network construction for any given set of publications. Post-construction, the network is visualized and analyzed with topological data analysis (TDA). TDA is used to calculate the connectivity of and the holes within the network, referred to as the zeroth and first homology. These homologies inform how each author connects and where publication data is sparse. This platform has successfully modelled collaborations within the biomedical department at Purdue University and provides insight into potential future collaborations.
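The step that distinguishes this approach from exact-match keyword networks can be sketched directly: connect keywords whose embedding vectors are close under cosine similarity. The tiny 3-dimensional "embeddings" below are invented for illustration; real vectors come from a trained embedding model, and the threshold is an assumption:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-d "embeddings"; real ones come from a trained model
# and have hundreds of dimensions.
embeddings = {
    "neural network": (0.9, 0.1, 0.2),
    "deep learning":  (0.85, 0.15, 0.25),
    "tissue engineering": (0.1, 0.9, 0.3),
}

def similar_keywords(embeddings, threshold=0.95):
    """Return keyword pairs whose cosine similarity exceeds the threshold --
    edges an exact-match keyword network would miss entirely."""
    keys = sorted(embeddings)
    return [(a, b) for i, a in enumerate(keys) for b in keys[i + 1:]
            if cosine(embeddings[a], embeddings[b]) > threshold]

edges = similar_keywords(embeddings)
```

Here "deep learning" and "neural network" are linked despite never sharing an exact keyword string, which is precisely the connection uncontrolled vocabulary would otherwise sever.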
309

[en] THE DEMOCRATIC ELITISM AND DISCOURSES OF THE BRAZILIAN SUPREME COURT / [pt] O ELITISMO DEMOCRÁTICO E DISCURSOS DO STF

SHANDOR TOROK MOREIRA 08 January 2013 (has links)
[pt] Como o Supremo Tribunal Federal reconstrói a relação entre Estado e Cidadania no Brasil contemporâneo, especialmente no que diz respeito à democracia nacional? Com apoio em dois modelos teóricos sobre a democracia, o elitismo democrático e os públicos participativos, a dissertação investigou o discurso público produzido pelo STF ao julgar determinados casos, identificando indícios de abuso de poder discursivo pela Corte nos mesmos. O referido abuso de poder discursivo é caracterizado pela influência do marco teórico do elitismo democrático e seu consequente potencial de reproduzir e reforçar desenho institucional servil ao repertório de ação não universalizável da elite política nacional. / [en] How does the Brazilian Supreme Court (BSC) reconstruct the relation between State and Citizenship in contemporary Brazil, especially concerning national democracy? The public discourse produced by the BSC in deciding certain cases was investigated through the lenses of two theoretical models of democracy, democratic elitism and participatory publics, in search of evidence of discursive power abuse. Such abuse is characterized by the influence of the democratic elitism framework and its potential to reproduce and reinforce an institutional design unable to counteract the problematic action repertoire of the Brazilian political elite.
310

Data analysis and visualization of 360° interactional datasets

Lozano Prieto, David January 2019 (has links)
In recent years there has been increasing interest in using 360° video in medical education. Recent efforts are starting to explore how nursing students experience and interact with 360° videos. However, once these interactions have been recorded in a database, there is a lack of ways to analyze the data, creating the need for a reliable method that can manage all of the collected data and visualize its valuable insights. Hence, the main goal of this thesis is to address this challenge by designing an approach to analyze and visualize this kind of data. This will allow teachers in health care education, and medical specialists, to understand the collected data in a meaningful way. To arrive at the most suitable solution, several meetings with nursing teachers took place to draw up a first draft structure of an application acting as the needed approach. The application was then used to analyze data collected in a study made in December. Finally, the application was evaluated through a questionnaire involving a group of medical specialists in education. The initial outcomes from the testing and evaluations indicate that the application successfully achieves the main goals of the project, and it has enabled discussion of ideas that will help improve the 360° video experience and its evaluation in nursing education, providing an additional tool to analyze, compare and assess students.
