241

The analysis of the structure of systems

Steward, Donald V. January 1973
Thesis (Ph. D.)--University of Wisconsin--Madison, 1973. / Typescript. Vita. Description based on print version record. Includes bibliographical references (leaves 339-355).
242

A model for file structure determination for large on-line data files

Martin, Laurence David. January 1968
Thesis (M. S.)--Washington State University.
243

Information Aggregation using the Cameleon# Web Wrapper

Firat, Aykut, Madnick, Stuart, Yahaya, Nor Adnan, Kuan, Choo Wai, Bressan, Stéphane 29 July 2005
Cameleon# is a web data extraction and management tool that provides information aggregation with advanced capabilities that are useful for developing value-added applications and services for electronic business and electronic commerce. To illustrate its features, we use an airfare aggregation example that collects data from eight online sites, including Travelocity, Orbitz, and Expedia. This paper covers the integration of Cameleon# with commercial database management systems, such as MS SQL Server, and XML query languages, such as XQuery.
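Cameleon#'s declarative extraction specs and API are not reproduced in this record, so the following is only a minimal Python sketch of the fan-out-and-merge aggregation pattern the abstract describes. The per-site extractors are stubs returning static sample fares, not real Travelocity/Orbitz/Expedia wrappers, and all function names are invented for the example.

```python
# Minimal information-aggregation sketch (illustrative only; NOT the
# Cameleon# API). Each "wrapper" stands in for a per-site extractor
# and returns fare records in a common schema.

def travelocity_wrapper(origin, dest):
    # Stub: a real wrapper would fetch and parse the site's results page.
    return [{"site": "Travelocity", "origin": origin, "dest": dest, "fare": 312.0}]

def orbitz_wrapper(origin, dest):
    return [{"site": "Orbitz", "origin": origin, "dest": dest, "fare": 298.5}]

def expedia_wrapper(origin, dest):
    return [{"site": "Expedia", "origin": origin, "dest": dest, "fare": 305.0}]

WRAPPERS = [travelocity_wrapper, orbitz_wrapper, expedia_wrapper]

def aggregate_fares(origin, dest):
    """Query every wrapper and return all offers sorted by fare."""
    offers = []
    for wrapper in WRAPPERS:
        offers.extend(wrapper(origin, dest))
    return sorted(offers, key=lambda o: o["fare"])

if __name__ == "__main__":
    for offer in aggregate_fares("BOS", "SFO"):
        print(f"{offer['site']:12s} {offer['fare']:7.2f}")
```

As the abstract notes, the real tool goes further by exposing aggregated results to SQL and XQuery engines; the stubs above only mirror the overall query-all-sources-and-merge structure.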
244

Personal data protection maturity model for the micro financial sector in Peru

Garcia, Arturo, Calle, Luis, Raymundo, Carlos, Dominguez, Francisco, Moguerza, Javier M. 27 June 2018
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / The microfinance sector is a strategic element in the economies of developing countries, since it facilitates the integration and development of all social classes and enables economic growth. Data in this sector grows rapidly every day, resulting from the transactions and operations these companies carry out on a daily basis. Appropriate management of personal data privacy policies is therefore necessary; otherwise, an organization risks failing to comply with personal data protection laws and regulations and losing the quality information needed for decision-making and process improvement. The present study proposes a personal data protection maturity model based on international privacy and information security standards, which also reveals an organization's personal data protection capabilities. Finally, the study proposes a diagnostic and tracing assessment tool, which was applied to five companies in the microfinance sector; the results were analyzed to validate the model and to support the success of data protection initiatives. / Peer reviewed
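The model's actual domains, criteria, and scoring rules are not included in this record; the snippet below is only a generic sketch of how such a maturity assessment is often computed, with invented domain and criterion names. Each criterion is rated on a 1-5 scale and ratings are averaged per domain and overall.

```python
# Generic maturity-score sketch (hypothetical domains and criteria;
# not the model proposed in the paper). Criteria are rated 1 (initial)
# through 5 (optimized); a domain's level is the mean of its ratings.

from statistics import mean

assessment = {
    "Privacy policies":   {"policy defined": 3, "policy reviewed": 2},
    "Consent management": {"consent recorded": 4, "withdrawal handled": 3},
    "Security controls":  {"access control": 4, "encryption at rest": 2},
}

def domain_levels(ratings):
    return {domain: mean(criteria.values()) for domain, criteria in ratings.items()}

def overall_maturity(ratings):
    levels = domain_levels(ratings)
    return mean(levels.values()), levels

if __name__ == "__main__":
    overall, per_domain = overall_maturity(assessment)
    for domain, level in per_domain.items():
        print(f"{domain:20s} level {level:.1f}")
    print(f"Overall maturity: {overall:.1f} / 5")
```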
245

Master data management maturity model for the microfinance sector in Peru

Vásquez Zúñiga, Daniel, Kukurelo Cruz, Romina, Raymundo Ibañez, Carlos, Dominguez, Francisco, Moguerza, Javier January 2018
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / The microfinance sector plays a strategic role, since it facilitates the integration and development of all social classes and contributes to sustained economic growth. The data resulting from the transactions and operations these companies carry out daily is growing exponentially, and appropriate management of this data is necessary; otherwise, the lack of valuable, high-quality information for decision-making and process improvement becomes a competitive disadvantage. Master Data Management (MDM) offers a new approach to data management, reducing the gap between the business perspective and the technology perspective. In this regard, it is important that an organization have the ability to implement a data management model for Master Data Management. This paper proposes a Master Data Management maturity model for the microfinance sector, which frames a series of formal requirements and criteria that provide an objective diagnosis, with the aim of improving processes until entities reach the desired maturity levels. The model was implemented using information from Peruvian microfinance organizations. Finally, validation of the proposed model showed that it serves as a means of identifying an organization's maturity level, helping Master Data Management initiatives succeed. / Peer reviewed
246

Spatio-Temporal Data Analysis by Transformed Gaussian Processes

Yan, Yuan 06 December 2018
In the analysis of spatio-temporal data, statistical inference based on the Gaussian assumption is ubiquitous due to its many attractive properties. However, data collected from different fields of science rarely meet the assumption of Gaussianity. One option is to apply a monotonic transformation to the data such that the transformed data have a distribution that is close to Gaussian. In this thesis, we focus on a flexible two-parameter family of transformations, the Tukey g-and-h (TGH) transformation. This family has the desirable property that its two parameters, g ∈ R and h ≥ 0, control the skewness and tail-heaviness of the distribution, respectively. Applying the TGH transformation to a standard normal distribution results in the univariate TGH distribution. Extensions to the multivariate case and to a spatial process were developed recently. In this thesis, motivated by the need to exploit wind as renewable energy, we tackle the challenges of modeling big spatio-temporal data that are non-Gaussian by applying the TGH transformation to different types of Gaussian processes: spatial (random field), temporal (time series), spatio-temporal, and their multivariate extensions. We explore various aspects of spatio-temporal data modeling techniques using transformed Gaussian processes with the TGH transformation. First, we use the TGH transformation to generate non-Gaussian spatial data with the Matérn covariance function, and study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters in the Matérn covariance via a carefully designed simulation study. Second, we build two autoregressive time series models using the TGH transformation. One model is applied to a dataset of observational wind speeds and shows advantages in forecasting accuracy; the other model is used to fit wind speed data from a climate model at gridded locations covering Saudi Arabia and to Gaussianize the data for each location. Third, we develop a parsimonious spatio-temporal model for time series data on a spatial grid and utilize the aforementioned Gaussianized climate-model wind speed data to fit the latent Gaussian spatio-temporal process. Finally, we discuss issues under a unified framework of modeling multivariate trans-Gaussian processes and adopt one of the TGH autoregressive models to build a stochastic generator for global wind speed.
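The abstract does not reproduce the transformation itself, but the Tukey g-and-h transformation is standard: for Z ~ N(0, 1), τ_{g,h}(Z) = (e^{gZ} − 1)/g · exp(hZ²/2) for g ≠ 0, with the limit Z · exp(hZ²/2) as g → 0. A short Python sketch (parameter values chosen arbitrarily for illustration) shows how it turns Gaussian draws into skewed, heavy-tailed samples:

```python
# Tukey g-and-h transformation: maps a standard normal Z to a variable
# whose skewness is controlled by g and tail-heaviness by h >= 0.

import numpy as np

def tukey_gh(z, g, h):
    """tau_{g,h}(z) = (exp(g*z) - 1)/g * exp(h*z**2/2); limit z*exp(h*z**2/2) as g -> 0."""
    z = np.asarray(z, dtype=float)
    tail = np.exp(h * z**2 / 2.0)
    if abs(g) < 1e-12:              # g -> 0 limit: symmetric, heavy-tailed
        return z * tail
    return (np.expm1(g * z) / g) * tail

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
t = tukey_gh(z, g=0.5, h=0.1)       # right-skewed, heavier tails than Gaussian

print(f"sample skewness ~ {((t - t.mean())**3).mean() / t.std()**3:.2f}")
print(f"sample excess kurtosis ~ {((t - t.mean())**4).mean() / t.std()**4 - 3:.2f}")
```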
247

Estudo para implantação de um Data Warehouse em um ambiente empresarial [A study on implementing a data warehouse in a business environment]

Wagner, Cláudio Arruda January 2003
Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Computer Science.
248

Detecting stochastic motifs in network and sequence data for human behavior analysis

Liu, Kai 26 August 2014
With the recent advent of Web 2.0, mobile computing, and pervasive sensing technologies, human activities can readily be logged, leaving digital traces of different forms. For instance, human communication activities recorded in online social networks allow user interactions to be represented as "network" data. Also, human daily activities can be tracked in a smart house, where the log of sensor triggering events can be represented as "sequence" data. This thesis research aims to develop computational data mining algorithms using the generative modeling approach to extract salient patterns (motifs) embedded in such network and sequence data, and to apply them to human behavior analysis. Motifs are defined as recurrent, over-represented patterns embedded in the data, and are known to be effective for characterizing complex networks. Many motif extraction methods found in the literature assume that a motif is either present or absent. In real practice, such salient patterns can appear partially due to their stochastic nature and/or the presence of noise. Thus, the probabilistic approach is adopted in this thesis to model motifs. For network data, we use a probability matrix to represent a network motif and propose a mixture model to extract network motifs. A component-wise EM algorithm is adopted, where the optimal number of stochastic motifs is automatically determined with the help of a minimum message length criterion. Considering also the edge occurrence ordering within a motif, we model a motif as a mixture of first-order Markov chains for the extraction. Using a probabilistic approach similar to the one for network motifs, an optimal set of stochastic temporal network motifs is extracted. We carried out rigorous experiments to evaluate the performance of the proposed motif extraction algorithms using synthetic data sets as well as real-world social network and mobile phone usage data sets, and obtained promising results. We also found that some of the results can be interpreted using the social balance and social status theories, which are well known in social network analysis. To evaluate the effectiveness of stochastic temporal network motifs beyond characterizing human behaviors, we incorporate them as local structural features into a factor graph model for followee recommendation (essentially a link prediction problem) in online social networks. The proposed motif-based factor graph model is found to significantly outperform existing state-of-the-art methods on the prediction task. The probabilistic framework proposed for stochastic temporal network motif extraction is also applicable to extracting motifs from sequence data. One possible way is to make use of the edit distance in the probabilistic framework so that subsequences with minor ordering variations can first be grouped to form the initial set of motif candidates. A mixture model can then be used to determine the optimal set of temporal motifs. We applied this approach to extract sequence motifs from a smart home data set containing sensor triggering events corresponding to activities performed by residents in the smart home. The unique behavior extracted for each resident based on the detected motifs is also discussed.
Keywords: Stochastic network motifs, finite mixture models, expectation maximization algorithms, social networks, stochastic temporal network motifs, mixture of Markov chains, human behavior analysis, followee recommendation, signed social networks, activity of daily living, smart environments
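The thesis's full estimation procedure (including the minimum-message-length model selection and edit-distance grouping mentioned above) is not reproduced in this record; as a rough sketch of the underlying "mixture of first-order Markov chains" idea only, a minimal EM loop for clustering symbol sequences might look like this, with toy data invented for the example:

```python
# Minimal EM for a mixture of first-order Markov chains over symbol
# sequences (illustrative sketch only). Component k has mixture weight
# w[k], initial distribution pi[k], and transition matrix A[k].

import numpy as np

def em_markov_mixture(seqs, n_states, K, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(K, 1.0 / K)
    pi = rng.dirichlet(np.ones(n_states), K)
    A = rng.dirichlet(np.ones(n_states), (K, n_states))

    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sequence.
        logr = np.zeros((len(seqs), K))
        for n, s in enumerate(seqs):
            for k in range(K):
                ll = np.log(w[k]) + np.log(pi[k, s[0]])
                for a, b in zip(s[:-1], s[1:]):
                    ll += np.log(A[k, a, b])
                logr[n, k] = ll
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters from responsibility-weighted counts.
        w = r.mean(axis=0)
        pi = np.full((K, n_states), 1e-3)           # small smoothing prior
        A = np.full((K, n_states, n_states), 1e-3)
        for n, s in enumerate(seqs):
            for k in range(K):
                pi[k, s[0]] += r[n, k]
                for a, b in zip(s[:-1], s[1:]):
                    A[k, a, b] += r[n, k]
        pi /= pi.sum(axis=1, keepdims=True)
        A /= A.sum(axis=2, keepdims=True)
    return w, pi, A, r

# Toy data: two behavioral "motifs" -- one cycling 0->1->2, one 2->1->0.
seqs = [[0, 1, 2, 0, 1, 2], [2, 1, 0, 2, 1, 0], [0, 1, 2, 0, 1], [1, 0, 2, 1, 0]]
w, pi, A, r = em_markov_mixture(seqs, n_states=3, K=2)
print(np.round(r, 2))   # each row: component responsibilities for one sequence
```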
249

Schema quality analysis in a data integration system

BATISTA, Maria da Conceição Moraes 31 January 2008
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Information Quality (IQ) has become a critical concern in organizations and in information systems research. Poor-quality information can negatively affect an organization's effectiveness. The growing use of data warehouses, together with direct access by managers and users to information drawn from multiple sources, has increased the need for quality in corporate information. The notion of IQ in information systems emerged in recent years and has attracted ever-growing interest. There is still no common agreement on a definition of IQ, only a consensus that it is a concept of "fitness for use": information is considered fit for use from the perspective of a user's requirements and needs, that is, the quality of information depends on its usefulness. Integrated access to information spread across multiple heterogeneous, distributed, and autonomous data sources is an important problem to be solved in many application domains. Typically there are several ways to answer a global query over data in different sources with different combinations, but obtaining all possible answers is quite costly. While much research has addressed query processing and plan selection with cost criteria, little is known about the problem of incorporating IQ aspects into the global schemas of data integration systems. In this work, we propose an analysis of IQ in a data integration system, more specifically of the quality of the system's schemas. Our main goal is to improve the quality of query execution. Our proposal is based on the hypothesis that one way to optimize query processing is to build schemas with high IQ scores. Thus, the focus of this work is the development of IQ analysis mechanisms for data integration schemas, especially the global schema. We first built a list of IQ criteria and related them to the elements of data integration systems. We then focused on the integrated schema and formally specified schema quality criteria: minimality, schema completeness, and type consistency. We also specified an algorithm that performs adjustments to improve minimality, and algorithms to measure type consistency in schemas. Our experiments show that the execution time of a query in a data integration system can decrease if the query is submitted against a schema with high minimality and type-consistency scores.
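The thesis's formal criteria are not reproduced in this record, so the snippet below is only a toy illustration of two of the schema-quality notions it names: completeness (how much of the source schemas the global schema covers) and minimality (absence of redundant global attributes). The schemas, attribute names, and scoring formulas are invented for the example.

```python
# Toy schema-quality scores for a data integration system (illustrative
# definitions only, not the thesis's formal ones). Schemas are attribute
# sets; mappings pair each global attribute with source attributes.

source_schemas = {
    "S1": {"cust_id", "cust_name", "city"},
    "S2": {"client_id", "client_name", "phone"},
}
global_schema = {"id", "name", "city", "city_dup"}   # "city_dup" is redundant
mappings = {
    "id":       {("S1", "cust_id"), ("S2", "client_id")},
    "name":     {("S1", "cust_name"), ("S2", "client_name")},
    "city":     {("S1", "city")},
    "city_dup": {("S1", "city")},                    # duplicates "city"
}

def completeness(source_schemas, mappings):
    """Fraction of source attributes represented by some global attribute."""
    covered = {pair for pairs in mappings.values() for pair in pairs}
    total = sum(len(attrs) for attrs in source_schemas.values())
    return len(covered) / total

def minimality(global_schema, mappings):
    """1 minus the fraction of global attributes whose mapping duplicates another's."""
    seen, redundant = set(), 0
    for attr in sorted(global_schema):
        srcs = frozenset(mappings.get(attr, set()))
        if srcs and srcs in seen:
            redundant += 1
        seen.add(srcs)
    return 1 - redundant / len(global_schema)

print(f"completeness = {completeness(source_schemas, mappings):.2f}")   # 0.83
print(f"minimality   = {minimality(global_schema, mappings):.2f}")      # 0.75
```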
250

TELEMETRY AND DATA LOGGING IN A FORMULA SAE RACE CAR

Schultz, Aaron 10 1900
The problem with designing and simulating a race car entirely through CAD and other computer simulations is that the real-world behavior of the car will differ from the results output by CFD and FEA analysis. One way to learn more about how the car actually handles is through telemetry and data logging from the many different sensors on the car while it is running at racing speeds. This data can help the engineering team build new components and tune the car's many systems to achieve the fastest possible time around a track.
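As a hedged illustration of the logging side only (sensor names, rates, and value ranges here are invented, and a real FSAE logger would read these channels off the car's CAN bus rather than from stub functions), a minimal timestamped CSV logger might look like:

```python
# Minimal data-logging sketch: sample a set of sensors at a fixed rate
# and append timestamped rows to a CSV file. Sensor reads are stubbed
# with random values standing in for real CAN-bus channels.

import csv
import random
import time

SENSORS = {
    "rpm":        lambda: random.uniform(3000, 9000),
    "throttle_%": lambda: random.uniform(0, 100),
    "coolant_C":  lambda: random.uniform(70, 105),
}
SAMPLE_HZ = 10

def log_session(path, seconds=2.0):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_s", *SENSORS])          # header: time + channel names
        t0 = time.monotonic()
        while (t := time.monotonic() - t0) < seconds:
            writer.writerow([f"{t:.3f}", *(f"{read():.1f}" for read in SENSORS.values())])
            time.sleep(1.0 / SAMPLE_HZ)

if __name__ == "__main__":
    log_session("telemetry.csv")
```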
