  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

AUTOMATED DATA MANAGEMENT IN A HIGH-VOLUME TELEMETRY DATA PROCESSING ENVIRONMENT

Griffin, Alan R., Wooten, R. Stephen 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / The vast amount of data telemetered from space probe experiments requires careful management and tracking from initial receipt through acquisition, archiving, and distribution. This paper presents the automated system used at the Phillips Laboratory, Geophysics Directorate, for tracking telemetry data from its receipt at the facility to its distribution on various media to the research community. Features of the system include computerized databases, automated generation of media labels, automated generation of reports, and automated archiving.
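Although the paper predates modern tooling, the core idea is a computerized record that follows each telemetry dataset from receipt through archiving to distribution and drives automated label and report generation. A minimal sketch in Python is shown below; the field names, status values, and experiment name are illustrative assumptions, not the Phillips Laboratory design.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TelemetryDataset:
    """One tracked dataset; fields and status values are assumed for illustration."""
    dataset_id: int
    experiment: str
    received: date
    status: str = "received"              # received -> archived -> distributed
    archive_volume: str | None = None     # label of the tape/volume it was archived to
    distributed_to: list[str] = field(default_factory=list)

    def media_label(self) -> str:
        # Automated label text for the archive medium.
        return f"{self.experiment}-{self.dataset_id:05d} ({self.received.isoformat()})"

datasets = [
    TelemetryDataset(1, "EXPERIMENT-A", date(1992, 3, 14), "archived", "TAPE-0042"),
]

# Automated report: archived datasets not yet sent to any requester.
for d in datasets:
    if d.status == "archived" and not d.distributed_to:
        print(f"{d.media_label()} awaiting distribution on {d.archive_volume}")
```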
2

The integration of product data with workflow management systems through a common data model

Kovács, Zsolt January 1999 (has links)
No description available.
3

An integrated data system for wildlife management

Kale, Lorne Wayne January 1979 (has links)
In 1975 the British Columbia Fish and Wildlife Branch implemented the Management Unit system for controlling and monitoring wildlife harvests in the province. This change in management boundaries should have been accompanied by an intensified data handling system, so that accurate and reliable management indices could be produced for each M.U. This thesis describes a data system that was developed in response to Region 1 blacktailed deer management needs and offers a new approach to wildlife data system management. The proposed system integrates field contact and hunter questionnaire data, and allows managers to monitor the effects of their policy decisions. Management strategies can be tested by manipulating exploitation parameters, such as bag limits and season lengths, to determine their effect on specific wildlife populations. In addition, the system restores and upgrades obsolete data files, thus allowing past harvest trends to be applied to new management zones. Flexibility, for both anticipated changes in resource stratification and unanticipated data needs, is also preserved. Biologists require management estimates for specific areas within M.U.s to manage wildlife effectively at the M.U. level. Each of the 15 M.U.s in Region 1 has been subdivided into between 5 and 32 subunits, depending on area and geography. The 246 subunits attempt to partition large, unmanageable wildlife resources into separate populations of manageable size. A location list, or computerized gazetteer, was used to automatically assign hunt location descriptions to the appropriate M.U.s and subunits. New techniques for hunter sample estimates are proposed in this thesis. Mark-recapture methods for determining sampling intensities and the partitioning of large resident areas into resident M.U.s can improve estimates. Different methods for treating multiple mailing stage data are also presented. The data system described in this thesis consists of two parts: 1) the establishment of master data files and 2) the retrieval of data from those files. Five subsystems of FORTRAN computer programs control the input of Fish and Wildlife harvest data and manipulate them into master data files. Information retrieval is accomplished with standard statistical packages, such as SPSS. A hierarchical file structure is used to store the harvest data, so most wildlife management data requests can be answered directly. The 1975 Region 1 blacktailed deer harvest data were used to test the sampling assumptions in both the hunter sample and field contact programs. Significant differences between resident M.U.s were found for hunter sample sampling intensity, percentage response, percentage sampled, and percentage of hunters among respondents. Significant differences were established in the percentage hunter success in different resident M.U.s and for different mailing phases. The 1975 field contact program produced a non-uniform distribution of contacts with respect to M.U.s. Highly significant differences between the percentage of licence holders checked from different resident M.U.s were also found. Kills for field-checked hunters who also responded to the hunter sample questionnaire were compared to kills reported on the questionnaire. Numerous irregularities, including unreported kills, misreported kills, and totals exceeding bag limits, were found, and a minimum error rate of about 20% was calculated.
Known buck kills were generally (87.9%) reported as bucks, while does were reported correctly only 74% of the time, and fawns only 48.0%. The format of the 1975 deer hunter questionnaire is suspected to have influenced those error rates. Successful and unsuccessful hunters had different probabilities of responding to the hunter questionnaire: only 48.0% of unsuccessful hunters responded, while 59.6% of successful hunters reported. Hunter sample harvest estimates using different estimation methods were compared to known kills in two Vancouver Island subunits. During the 1975 season, 88 deer were known to have been shot in subunit 1-5-3 (Nanaimo River) and 140 in subunit 1-5-7 (Northwest Bay). All estimated kills were considerably higher than the known harvest, with the marked success-phase mailing estimation method producing the lowest estimates: 170 deer (193%) for subunit 1-5-3 and 179 deer (127%) for subunit 1-5-7. Although the total estimated deer kill for Vancouver Island remained relatively constant from 1964 to 1974, the same data, when analysed by M.U. and subunit, showed decreasing harvests in some M.U.s and subunits that were balanced by increasing kills in others. The data system proposed in this thesis provides an opportunity for B.C. wildlife management to develop an effective management framework for B.C.'s valuable wildlife resources. However, to do so, the proposed system, or one with similar capabilities, must be implemented and supported by the B.C. Fish and Wildlife Branch. / Land and Food Systems, Faculty of / Graduate
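The abstract mentions mark-recapture methods for estimating sampling intensities without stating which estimator is used; the standard Lincoln-Petersen form conveys the general idea (the notation below is generic, not quoted from the thesis):

```latex
\[
  \hat{N} \;=\; \frac{M\,C}{R}
\]
```

where \(M\) is the number of individuals marked in the first sample (for example, hunters contacted in the field), \(C\) is the size of the second sample (for example, questionnaire respondents), and \(R\) is the number appearing in both samples.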
4

Shannon’s information theory in hydrologic network design and estimation

Husain, Tahir January 1979 (has links)
The hydrologic basin and its data collection network is treated as a communication system. The spatial and temporal characteristics of the hydrologic events throughout the basin are represented as a message source and this message is transmitted by the network stations to a data base. A measure of the basin information transmitted by the hydrologic network is derived using Shannon's multivariate information. An optimum network station selection criterion, based on Shannon's methodology, is established and is shown to be independent of the estimation of the events at ungauged locations. Multivariate information transmission for the hydrologic network is initially computed using the discrete entropy concept. The computation of the multivariate entropy is then extended to the case of variables represented by continuous distributions. Bivariate and multivariate forms of the normal and lognormal distributions and the bivariate form of gamma, extreme value and exponential probability density functions are considered. Computational requirements are substantial when dealing with large numbers of grid points in the basin representation, and in the combinatorial search for optimum networks. Computational aids are developed which reduce the computational load to a practical level. The performance of optimal information transmission networks is compared with networks designed by existing methods. The ability of Shannon's theory to cope with the multivariate nature of the output from a network is shown to provide network designs with generally superior estimation performance. Although the optimal information transmission criterion avoids the necessity of specifying the estimators for events at ungauged locations, the criterion can also be applied to the determination of optimal estimators. The applicability of the information transmission criterion in determining optimal estimation parameters is demonstrated for simple and multiple linear regression and Kalman filter estimation. The information transmission criterion is also applied to design the least-cost network where a choice of instrument precision exists. / Applied Science, Faculty of / Civil Engineering, Department of / Graduate
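For reference, the discrete joint entropy and one standard multivariate measure of transmitted information underlying this design criterion are, in textbook notation (these are general Shannon-theory definitions, not equations quoted from the thesis):

```latex
\[
  H(X_1,\dots,X_n) \;=\; -\sum_{x_1,\dots,x_n} p(x_1,\dots,x_n)\,\log p(x_1,\dots,x_n),
  \qquad
  T(X_1,\dots,X_n) \;=\; \sum_{i=1}^{n} H(X_i) \;-\; H(X_1,\dots,X_n).
\]
```

A design criterion of this kind favours station subsets whose joint records transmit as much information as possible about the basin-wide message source.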
5

Comparative evaluation of microarray-based gene expression databases

Do, Hong-Hai, Kirsten, Toralf, Rahm, Erhard 11 December 2018 (has links)
Microarrays make it possible to monitor the expression of thousands of genes in parallel, thus generating huge amounts of data. So far, several databases have been developed for managing and analyzing this kind of data, but the state of the art in this field is still at an early stage. In this paper, we comprehensively analyze the requirements for microarray data management. We consider the various kinds of data involved as well as data preparation, integration and analysis needs. The identified requirements are then used to comparatively evaluate eight existing microarray databases described in the literature. In addition to providing an overview of the current state of the art, we identify problems that should be addressed in the future to obtain better solutions for managing and analyzing microarray data.
6

Data Management and Curation: Services and Resources

Kollen, Christine, Bell, Mary 18 October 2016 (has links)
Poster from University of Arizona 2016 IT Summit / Are you or the researchers you work with writing a grant proposal that requires a data management plan? Are you working on a research project and have questions about how to effectively and efficiently manage your research data? Are you interested in sharing your data with other researchers? We can help! For the past several years, the University of Arizona (UA) Libraries, in collaboration with the Office of Research and Discovery and the University Information Technology Services, has been providing data management services and resources to the campus. We are interested in tailoring our services and resources to what you need. We conducted a research data management survey in 2014 and are currently working on the Data Management and Data Curation and Publication (DMDC) pilot. This poster will describe what data management and curation services we are currently providing, and ask for your feedback on potential new data management services and resources.
7

Master data management maturity model for the successful of mdm initiatives in the microfinance sector in Peru

Vásquez, Daniel, Kukurelo, Romina, Raymundo, Carlos, Dominguez, Francisco, Moguerza, Javier 04 1900 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / The microfinance sector plays a strategic role because it facilitates the integration of all social classes into sustained economic growth. At the same time, the volume of data generated by the transactions and operations these companies carry out daily is growing exponentially. Appropriate management of this data is therefore necessary; otherwise, the lack of valuable, high-quality information for decision-making and process improvement becomes a competitive disadvantage. Master Data Management (MDM) offers a new approach to data management that reduces the gap between the business and technology perspectives. In this regard, it is important that the organization have the ability to implement a data management model for MDM. This paper proposes a master data management maturity model for the microfinance sector, which frames a series of formal requirements and criteria that provide an objective diagnosis, with the aim of improving processes until entities reach the desired maturity levels. The model was built using information from Peruvian microfinance organizations. Finally, after validation, the proposed model was shown to serve as a means of identifying an organization's maturity level and supporting the success of master data management initiatives. / Peer reviewed
8

Relational Database for Visual Data Management

Lord, Dale 10 1900 (has links)
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Often it is necessary to retrieve segments of video with certain characteristics, or features, from a large archive of footage. This paper discusses how image processing algorithms can be used to automatically create a relational database, which indexes the video archive. This feature extraction can be performed either upon acquisition or in post-processing. The database can then be queried to quickly locate and recover video segments with certain specified key features.
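A minimal sketch of the idea, assuming an invented segment table and invented feature columns (they are not the schema or features described in the paper): image-processing output is stored per segment, and an ordinary SQL query then recovers matching footage.

```python
import sqlite3

# Toy index of video segments by automatically extracted features.
# Table name, columns and thresholds are assumptions for illustration only.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE segment (
        tape_id          TEXT,
        start_frame      INTEGER,
        end_frame        INTEGER,
        mean_brightness  REAL,     -- example feature from image processing
        motion_score     REAL,     -- example feature: inter-frame motion
        contains_horizon INTEGER   -- example boolean feature (0/1)
    )
""")
db.execute("INSERT INTO segment VALUES ('T-17', 1200, 1850, 0.62, 0.91, 1)")
db.execute("INSERT INTO segment VALUES ('T-17', 1851, 2400, 0.55, 0.12, 0)")

# Query: recover all high-motion segments that include a horizon line.
for tape, start, end in db.execute(
    "SELECT tape_id, start_frame, end_frame FROM segment "
    "WHERE motion_score > 0.8 AND contains_horizon = 1"
):
    print(f"{tape}: frames {start}-{end}")
```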
9

Data Quality Through Active Constraint Discovery and Maintenance

Chiang, Fei Yen 10 December 2012 (has links)
Although integrity constraints are the primary means for enforcing data integrity, there are cases in which they are not defined or are not strictly enforced. This leads to inconsistencies in the data, causing poor data quality. In this thesis, we leverage the power of constraints to improve data quality. To ensure that the data conforms to the intended application domain semantics, we develop two algorithms focusing on constraint discovery. The first algorithm discovers a class of conditional constraints, which hold over a subset of the relation, under specific conditional values. The second algorithm discovers attribute domain constraints, which bind specific values to the attributes of a relation for a given domain. These two types of constraints have been shown to be useful for data cleaning. In practice, weak enforcement of constraints often occurs for performance reasons. This leads to inconsistencies between the data and the set of defined constraints. To resolve this inconsistency, we must determine whether it is the constraints or the data that is incorrect, and then make the necessary corrections. We develop a repair model that considers repairs to the data and repairs to the constraints on an equal footing. We present repair algorithms that find the necessary repairs to bring the data and the constraints back to a consistent state. Finally, we study the efficiency and quality of our techniques. We show that our constraint discovery algorithms find meaningful constraints with good precision and recall. We also show that our repair algorithms resolve many inconsistencies with high quality repairs, and propose repairs that previous algorithms did not consider.
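As a rough illustration of the first kind of constraint discovered (a dependency that holds only under a specific condition), consider the toy check below; the relation, attribute values, and condition are invented for the example and are not taken from the thesis.

```python
# Toy relation: (country, zip, city). Suppose zip -> city is only expected to hold
# on the subset where country = 'UK' (a conditional constraint). All values invented.
rows = [
    ("UK", "EH1",   "Edinburgh"),
    ("UK", "EH1",   "Edinburgh"),
    ("UK", "G1",    "Glasgow"),
    ("US", "10001", "New York"),
    ("US", "10001", "NYC"),      # violates zip -> city, but lies outside the condition
]

def holds_conditionally(rows, condition, lhs, rhs):
    """Check whether the dependency lhs -> rhs holds on the rows satisfying condition."""
    seen = {}
    for r in rows:
        if not condition(r):
            continue
        key, val = r[lhs], r[rhs]
        if key in seen and seen[key] != val:
            return False   # same left-hand value maps to two different right-hand values
        seen[key] = val
    return True

print(holds_conditionally(rows, lambda r: r[0] == "UK", lhs=1, rhs=2))  # True
print(holds_conditionally(rows, lambda r: True, lhs=1, rhs=2))          # False
```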
10

Querying, Exploring and Mining the Extended Document

Sarkas, Nikolaos 31 August 2011 (has links)
The evolution of the Web into an interactive medium that encourages active user engagement has ignited a huge increase in the amount, complexity and diversity of available textual data. This evolution forces us to re-evaluate our view of documents as simple pieces of text and of document collections as immutable and isolated. Extended documents published in the context of blogs, micro-blogs, on-line social networks, customer feedback portals, can be associated with a wealth of meta-data in addition to their textual component: tags, links, sentiment, entities mentioned in text, etc. Collections of user-generated documents grow, evolve, co-exist and interact: they are dynamic and integrated. These unique characteristics of modern documents and document collections present us with exciting opportunities for improving the way we interact with them. At the same time, this additional complexity combined with the vast amounts of available textual data present us with formidable computational challenges. In this context, we introduce, study and extensively evaluate an array of effective and efficient solutions for querying, exploring and mining extended documents, dynamic and integrated document collections. For collections of socially annotated extended documents, we present an improved probabilistic search and ranking approach based on our growing understanding of the dynamics of the social annotation process. For extended documents, such as blog posts, associated with entities extracted from text and categorical attributes, we enable their interactive exploration through the efficient computation of strong entity associations. Associated entities are computed for all possible attribute value restrictions of the document collection. For extended documents, such as user reviews, annotated with a numerical rating, we introduce a keyword-query refinement approach. The solution enables the interactive navigation and exploration of large result sets. We extend the skyline query to document streams, such as news articles, associated with categorical attributes and partially ordered domains. The technique incrementally maintains a small set of recent, uniquely interesting extended documents from the stream. Finally, we introduce a solution for the scalable integration of structured data sources into Web search. Queries are analysed in order to determine what structured data, if any, should be used to augment Web search results.
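The skyline extension in the last part can be pictured with a simplified sketch: a newly arriving document is kept only if no retained document dominates it, and it evicts any documents it dominates. The two-score representation and the "larger is better" preference below are assumptions for illustration; the thesis's setting uses categorical attributes over partially ordered domains.

```python
def dominates(a, b):
    """a dominates b if a is at least as preferred on every attribute and strictly
    better on at least one (higher value = more preferred in this toy sketch)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline_insert(skyline, doc):
    """Incrementally maintain the skyline as a new document arrives from the stream."""
    if any(dominates(s, doc) for s in skyline):
        return skyline                                  # dominated: discard the new document
    return [s for s in skyline if not dominates(doc, s)] + [doc]

# Toy stream: (recency_score, relevance_score) per news article; values are invented.
skyline = []
for doc in [(0.9, 0.2), (0.5, 0.5), (0.4, 0.6), (0.95, 0.3), (0.3, 0.4)]:
    skyline = skyline_insert(skyline, doc)
print(skyline)   # only mutually non-dominated documents remain
```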
