141

The nonhomogeneous Poisson process with covariate effects /

Shih, Li-Hsing, January 1991 (has links)
Thesis (Ph.D.)--University of Oklahoma, 1991. / Includes bibliographical references (leaves 148-153).
142

Improved testing and data analysis procedures for the Rolling Dynamic Deflectometer

Nam, Boo Hyun 17 December 2012 (has links)
A Rolling Dynamic Deflectometer (RDD) is a nondestructive testing device for determining continuous deflection profiles of pavements. Unlike discrete testing methods, the RDD performs continuous measurements. The ability to perform continuous measurements makes the RDD a powerful screening/evaluation tool for quickly characterizing large sections of pavement, with little danger of missing critical pavement features. RDD testing applications have involved pavement forensic investigations, delineations of areas to be repaired, selection of rehabilitation treatments, measurements of relative improvements due to the rehabilitation, and monitoring of changes with time (trafficking and environmental loading). However, the speed of RDD testing with the current rolling sensors is between 1 and 2 mph (1.6 to 3.2 km/hr). Improvements in testing speed and data analysis procedures would increase its usefulness in project-level studies as well as permit its use in some pavement network-level studies. A three-part study was carried out to further improve the RDD. The first part involved the development of speed-improved rolling sensors (referred to as the third-generation rolling sensors). Key benefits of this new rolling sensor are: (1) increased testing speed, up to 5 mph (8.0 km/hr), and (2) a reduced level of rolling noise during RDD measurements. With this rolling sensor, the RDD can collect more deflection measurements at a speed of 3 to 5 mph (4.8 to 8.0 km/hr). Field trials using the first- and third-generation rolling sensors on both flexible and rigid pavements were performed to evaluate the performance of the third-generation rolling sensor. The second part of this study involved enhancements to the RDD data analysis procedure. An alternative data analysis method was developed for the third-generation rolling sensor. This new analysis method produces results at higher speeds that are comparable to those of the existing analysis method used for testing at 1 to 2 mph (1.6 to 3.2 km/hr). Key benefits of this analysis method that were not previously available are: (1) distance-based deflection profiles (RDD deflections reported at a selected distance interval), (2) improved spatial resolution without sacrificing filtering performance, and (3) analyses of the rolling-noise characteristics and signal-to-noise-and-distortion ratios that better characterize the deflection profiles and their accuracy. The third part of this study involved investigating the effects of parameters affecting RDD deflection measurements, which include: (1) force level and operating frequency, (2) in-situ sensor calibration, (3) the load-displacement curve, and (4) pavement temperature variations. These parameters need to be considered in RDD testing and data analysis procedures because small errors in these parameters can adversely influence calculations of the RDD deflections. Criteria are presented for selecting the best operating parameters for testing. / text
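The distance-based deflection profile mentioned above can be illustrated with a short sketch. The Python snippet below is a hypothetical example, not the thesis's actual analysis code: it bins deflection samples by their position along the pavement at a selected reporting interval and returns the mean deflection per bin; the array names, units, and the 0.5 m interval are assumptions made for illustration.

```python
import numpy as np

def distance_based_profile(distance_m, deflection, interval_m=0.5):
    """Average deflection samples into fixed distance bins.

    distance_m : 1-D array of positions along the pavement (m)
    deflection : 1-D array of deflections measured at those positions
    interval_m : reporting interval for the profile (m)
    """
    edges = np.arange(distance_m.min(), distance_m.max() + interval_m, interval_m)
    bins = np.digitize(distance_m, edges)
    centers, means = [], []
    for b in np.unique(bins):
        mask = bins == b
        centers.append(distance_m[mask].mean())   # representative position of the bin
        means.append(deflection[mask].mean())      # mean deflection reported for the bin
    return np.array(centers), np.array(means)

# Example with synthetic data: 2000 samples over a 100 m pavement section
rng = np.random.default_rng(0)
d = np.sort(rng.uniform(0, 100, 2000))
w = 10 + 2 * np.sin(d / 5) + rng.normal(0, 0.3, d.size)  # synthetic deflections
centers, profile = distance_based_profile(d, w, interval_m=0.5)
print(profile[:5])
```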
143

Using C# and WPF to Create Fast Plots for Telemetry Analysis on Large Data Sets

Burns, Steven, Endress, William 10 1900 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / Upon completion of a test where telemetry (TM) data was collected, the resulting TM file will usually contain millions of data points. Traditionally, MatLab™, Mathlab™, or some third-party software is used to plot the data. These methods may not always be desirable due to the expense of licensing, restrictions on the ability to create custom graphs, and the inability to quickly plot large amounts of data. These problems were solved by using Windows Presentation Foundation (WPF) graphics capabilities in conjunction with C# to develop a unique set of algorithms to display custom graphs of unlimited size with quick response.
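The abstract does not spell out the plotting algorithms used, but one common way to keep plots responsive on millions of points is min/max decimation, sketched below in Python as a hypothetical illustration of the general idea (the technique and all names here are assumptions, not taken from the paper): each bucket of consecutive samples is reduced to its minimum and maximum, so the rendered trace preserves peaks while drawing far fewer points.

```python
import numpy as np

def minmax_decimate(y, n_buckets):
    """Reduce a long 1-D signal to 2*n_buckets points (min and max per bucket),
    preserving visual peaks while drastically cutting the number of points drawn."""
    y = np.asarray(y)
    usable = (y.size // n_buckets) * n_buckets   # drop the ragged tail
    buckets = y[:usable].reshape(n_buckets, -1)
    lo, hi = buckets.min(axis=1), buckets.max(axis=1)
    out = np.empty(2 * n_buckets)
    out[0::2], out[1::2] = lo, hi                # interleave min/max per bucket
    return out

# 5 million telemetry samples reduced to ~4000 plotted points
samples = np.random.default_rng(1).normal(size=5_000_000)
plot_points = minmax_decimate(samples, n_buckets=2000)
print(plot_points.size)  # 4000
```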
144

IRIG 106 Chapter 10 vs. iNET Packetization: Data Storage and Retrieval

Jones, Charles H. 10 1900 (has links)
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / The approach to recording data during Test & Evaluation has evolved dramatically over the decades. A simple, traditional approach is to pull all data into a PCM format and record that. A common current approach is to record data in an IRIG 106 Chapter 10 compliant format that records different forms of data (bus, discrete, video, etc.) in different channels of the recorder or exported data file. With network telemetry on the horizon, in the form of the integrated Network Enhanced Telemetry (iNET) standards, much of the data will be transported in iNET messages via Ethernet frames. These messages can potentially carry any type of data from any source. How do we record this data? Ultimately, no matter how the data is stored, it must be translated into a form that can be used for data analysis. Data storage forms that are conducive to this analysis are not necessarily the same that are conducive to real time recording. This paper discusses options and tradeoffs of different approaches to incorporating iNET data structures into the existing T&E architecture.
145

Light Curves of Type Ia Supernovae and Preliminary Cosmological Constraints from the ESSENCE Survey

Narayan, Gautham Siddharth 30 September 2013 (has links)
The ESSENCE survey discovered 213 type Ia supernovae at redshifts 0.10 < z < 0.81 between 2002 and 2008. We present their R- and I-band light curve measurements, obtained using the MOSAIC II imager at the CTIO 4 m telescope, along with rapid-response spectroscopy for each object from a range of large-aperture ground-based telescopes. We detail our program to obtain quantitative classifications and precise redshifts from our spectroscopic follow-up of each object. We describe our efforts to improve the precision of the calibration of the CTIO 4 m natural photometric system. We use several empirical metrics to measure our internal photometric consistency and our absolute calibration of the survey. We assess the effect of various sources of systematic error on our measured fluxes, and estimate that the total systematic error budget from the photometric calibration is ~1%. We combine 108 ESSENCE SNIa that pass stringent quality cuts with a compilation of 441 SNIa from the three-year results presented by the Supernova Legacy Survey, and with Baryon Acoustic Oscillation measurements from the Sloan Digital Sky Survey, to produce preliminary cosmological constraints from these SNIa. This constitutes the largest sample of well-calibrated, spectroscopically confirmed SNIa to date. Assuming a flat Universe, we obtain a joint constraint of \(\Omega_M = 0.266^{+0.026}_{-0.016}\,(\mathrm{stat},\,1\sigma)\) and \(w = -1.112^{+0.069}_{-0.072}\,(\mathrm{stat},\,1\sigma)\). These measurements are consistent with a cosmological constant. / Physics
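For context, the parameters quoted above enter a SNIa analysis through the standard distance-modulus relation; the expressions below are the textbook flat-wCDM form (stated here for illustration, not reproduced from the thesis), relating each supernova's observed peak brightness to \(\Omega_M\) and \(w\):

\[
d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^{z}\frac{dz'}{\sqrt{\Omega_M\,(1+z')^{3} + (1-\Omega_M)\,(1+z')^{3(1+w)}}},
\qquad
\mu(z) \equiv m - M = 5\log_{10}\!\left(\frac{d_L(z)}{10\;\mathrm{pc}}\right).
\]

Fitting the measured \(\mu(z)\) of a calibrated SNIa sample over a grid of \((\Omega_M, w)\), together with the BAO constraint, yields joint constraints of the kind quoted above.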
146

Modelling multivariate interval-censored and left-truncated survival data using proportional hazards model

Cheung, Tak-lun, Alan, 張德麟 January 2003 (has links)
published_or_final_version / abstract / toc / Statistics and Actuarial Science / Master / Master of Philosophy
147

Analysis of economic data using mining techniques

Ζαβουδάκης, Γεώργιος 19 May 2015 (has links)
Following the great surge in technological development, the volume of data and information today is enormous, and it will only grow in the years ahead. It is certain, therefore, that we live in the information society, where turning data into information must in turn lead to turning information into knowledge. This has created the need to process these data and convert them into useful information that supports decision making. Data mining techniques are an important tool for drawing knowledge from large volumes of data, and when combined with statistical methods they make information retrieval straightforward. The convergence of disparate scientific fields (statistics, machine learning, information theory, and computational methods) has created a new discipline with powerful tools. This discipline is called Data Mining (DM) and is part of the Knowledge Discovery in Databases (KDD) process. The tools of DM are its algorithms, which attempt to find useful and understandable patterns in data. The main objective of this thesis is to bring together the basic algorithms and methods that select and clean data, recognize patterns, optimize a management system, and cluster data, with emphasis on algorithms suited to financial time-series data. Besides surveying the methods and applications of data mining and KDD, we apply clustering techniques to a data set comprising financial data from three categories: the prices of high-capitalization stocks in the Nasdaq index, the Euro/dollar exchange rate over time, and the evolution of the price of oil per barrel on international markets. The thesis is divided into five chapters: introduction, theoretical background, methodology, implementation of a practical application, and conclusions. Chapter 1 gives a first introduction to knowledge discovery from data; Chapter 2 reviews the literature and presents in detail the theoretical background of the methods used. Chapter 3 presents the methodologies used in the study (mining methods for clustering, classification, and prediction), while the following chapter presents a practical application of the above as results of these methodologies. Finally, Chapter 5 presents the conclusions that can be drawn from the practical application. The thesis aims to highlight the relationship that can exist between economics and artificial intelligence, focusing mainly on whether the latter can provide solutions to key issues, problems, and challenges arising in the modern economic environment; the means to this end are data mining techniques. The sources used in this work include many scientific books on economics, finance, artificial intelligence and data mining methods, multicriteria classification techniques, and statistics. The result of combining the above is presented in the pages that follow.
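As an illustration of the kind of clustering the thesis applies to financial series, the sketch below is a hypothetical example (not taken from the thesis): it derives simple return-based features for a handful of assets, standardizes them, and groups the assets with k-means. The data, feature choices, and cluster count are assumptions made for illustration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical daily closing prices for a few assets (rows = days, columns = assets)
rng = np.random.default_rng(42)
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.02, size=(500, 6)), axis=0)

# Per-asset features: mean daily log return and return volatility
returns = np.diff(np.log(prices), axis=0)
features = np.column_stack([returns.mean(axis=0), returns.std(axis=0)])

# Standardize the features and cluster the assets into two groups
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster assignment for each of the six assets
```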
148

Predictive Gaussian Classification of Functional MRI Data

Yourganov, Grigori 14 January 2014 (has links)
This thesis presents an evaluation of algorithms for classification of functional MRI data. We evaluated the performance of probabilistic classifiers that use a Gaussian model against a popular non-probabilistic classifier (the support vector machine, SVM). A pool of classifiers consisting of linear and quadratic discriminants, linear and non-linear Gaussian Naive Bayes (GNB) classifiers, and linear SVM was evaluated on several sets of real and simulated fMRI data. Performance was measured using two complementary metrics: accuracy of classification of fMRI volumes within a subject, and reproducibility of within-subject spatial maps; both metrics were computed using split-half resampling. Regularization parameters of the multivariate methods were tuned to optimize out-of-sample classification and/or within-subject map reproducibility. SVM showed no advantage in classification accuracy over the Gaussian classifiers. The performance of SVM was matched by the linear discriminant, and at times outperformed by the quadratic discriminant or nonlinear GNB. Among all tested methods, linear and quadratic discriminants regularized with principal components analysis (PCA) produced spatial maps with the highest within-subject reproducibility. We also demonstrated that the number of principal components that optimizes the performance of linear/quadratic discriminants is sensitive to the mean magnitude, variability, and connectivity of the simulated active signal. In real fMRI data, this number is correlated with behavioural measures of post-stroke recovery and, in a separate study, with behavioural measures of self-control. Using data from a study of cognitive aspects of aging, we accurately predicted the age group of the subject from within-subject spatial maps created by our pool of classifiers. We examined the cortical areas that showed a difference in recruitment between young and older subjects; this difference was demonstrated to be primarily driven by more prominent recruitment of the task-positive network in older subjects. We conclude that linear and quadratic discriminants with PCA regularization are well suited for fMRI data classification, particularly for within-subject analysis.
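A minimal sketch of a PCA-regularized linear discriminant pipeline of the kind evaluated above, written with scikit-learn on synthetic data; the component count, data shape, and cross-validation scheme are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for fMRI volumes: 200 volumes x 5000 voxels, two task conditions
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))
y = np.repeat([0, 1], 100)
X[y == 1, :50] += 0.4  # weak "activation" in a subset of voxels for condition 1

# PCA reduces the voxel space before the linear discriminant (the regularization step)
clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())

# Classification accuracy estimated by cross-validation
# (the thesis uses split-half resampling; 5-fold CV is used here for brevity)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```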
149

Visual exploratory analysis of large data sets : evaluation and application

Lam, Heidi Lap Mun 11 1900 (has links)
Large data sets are difficult to analyze. Visualization has been proposed to assist exploratory data analysis (EDA), as our visual systems can process signals in parallel to quickly detect patterns. Nonetheless, designing an effective visual analytic tool remains a challenge. This challenge is partly due to our incomplete understanding of how common visualization techniques are used by human operators during analyses, either in laboratory settings or in the workplace. This thesis aims to further understand how visualizations can be used to support EDA. More specifically, we studied techniques that display multiple levels of visual information resolutions (VIRs) for analysis, using a range of methods. The first study is a summary synthesis conducted to obtain a snapshot of knowledge in multiple-VIR use and to identify research questions for the thesis: (1) low-VIR use and creation, and (2) spatial arrangements of VIRs. The next two studies are laboratory studies investigating the visual memory cost of image transformations frequently used to create low-VIR displays, and overview use with single-level data displayed in multiple-VIR interfaces. For a more well-rounded evaluation, we needed to study these techniques in ecologically valid settings. We therefore selected the application domain of web session log analysis and applied our knowledge from the first three evaluations to build a tool called Session Viewer. Taking the multiple coordinated view and overview + detail approaches, Session Viewer displays multiple levels of web session log data and multiple views of session populations to facilitate data analysis from the high-level statistical to the low-level detailed session analysis approaches. Our fourth and last study for this thesis is a field evaluation conducted at Google Inc. with seven session analysts using Session Viewer to analyze their own data with their own tasks. Study observations suggested that displaying web session logs at multiple levels using the overview + detail technique helped bridge between high-level statistical and low-level detailed session analyses, and that the simultaneous display of multiple session populations at all data levels using multiple views allowed quick comparisons between session populations. We also identified design and deployment considerations to meet the needs of diverse data sources and analysis styles.
150

Sports Supplements and Risk: Perceptions of Young Male Supplement Users

Bowman, Carolyn 26 August 2011 (has links)
The purpose of this study was to describe the experience of using sports supplements from a risk theory perspective. Thematic analysis was used to conduct a secondary data analysis on 18 interviews with young men who were interested in supplements. Participants were recruited from Guelph area commercial gyms and campus athletic centres. Participants used supplements because they worked out and wanted to gain muscle. Supplements, and especially protein, were part of a common knowledge among people who worked out. Participants judged whether supplements were 'worth it' by weighing their cost, efficacy, and safety. Participants altered their behaviour in response to their perception of the riskiness of supplements, in order to feel safe. Many participants valued information from health professionals but found it lacking. Most information was available from sources that participants did not feel were credible.
