  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

An empirical approach to evaluating sufficient similarity: utilization of Euclidean distance as a similarity measure

Marshall, Scott. January 1900 (has links)
Thesis (Ph. D.)--Virginia Commonwealth University, 2010. / Prepared for: Dept. of Biostatistics. Title from resource description page. Includes bibliographical references. Unavailable until May 27, 2015.
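The title points to Euclidean distance as the similarity measure. As an illustration only (the record above gives no abstract, so the thesis's actual procedure and data are unknown), a minimal sketch of turning that distance into a bounded similarity score:

```python
import math

def euclidean_distance(x, y):
    """Euclidean distance between two equal-length numeric vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def similarity(x, y):
    """Map distance to a (0, 1] score: identical vectors score exactly 1."""
    return 1.0 / (1.0 + euclidean_distance(x, y))

# Hypothetical mixture profiles (illustrative data, not from the thesis):
a, b, c = [1.0, 2.0, 3.0], [1.0, 2.0, 3.5], [4.0, 0.0, 9.0]
print(similarity(a, b) > similarity(a, c))  # the nearer pair is more similar
```

The `1 / (1 + d)` transform is one common convention; any monotone decreasing map of distance would serve the same illustrative purpose.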
312

Application of a multidisciplinary approach to the systematics of Acomys (Rodentia: Muridae) from northern Tanzania

Mgode, Georgies Frank. January 2007 (has links)
Thesis (M.Sc.)(Zoology)--University of Pretoria, 2007. / Includes summary. Includes bibliographical references. Available on the Internet via the World Wide Web.
313

Classificatory reasoning in a multiple-viewpoint object-based representation / Raisonnement classificatoire dans une représentation à objets multi-points de vue

Mariño Drews, Olga. January 1900 (has links)
University thesis--Computer Science--Grenoble 1, 1993. / Dated 1994 according to the legal-deposit declaration. Abstract in French and English. Bibliographic references.
314

Taxonomy of the pocket gopher, Thomomys baileyi

Lane, James Dale, 1937- January 1965 (has links)
No description available.
315

How Hollywood got its groove back : reimagining the mass audience through the Motion Picture Association of America's rating system

Sandler, Kevin Scott January 2001 (has links)
This dissertation explores how Hollywood, in the years following the creation of the Classification and Ratings Administration (CARA) in 1968, reimagined the "mass audience" in an age of audience fragmentation. Building on Richard Maltby's suggestion that the rating system did not cause "the majors to alter their fundamental assumptions about the nature of film as a commercial commodity," I will show how the industry successfully continued to portray itself as a producer of universal entertainment for an undifferentiated audience. Guaranteeing that all CARA certified films would be rendered "respectable" for its audiences was the key tactic in this strategy. The abandonment of the X through the cooperation of large, vertically aligned and integrated companies has ensured an unusual industrial stability under the mediating regulatory practices of CARA for almost thirty years. In the process of detailing how the studios successfully anticipated and accommodated CARA's requirements for what I term the "incontestable R"---in theory a "restricted" category, but in fact a category permitting all-ages consumption---I explore the consequences that arranging pictures for an R has for Hollywood production practices. By examining the ill-fated attempts to restore the adult category with the NC-17 rating in 1990 and Showgirls in 1995, I demonstrate how the continuing stigmatization of the NC-17 serves the economic interests of its large member distributors at the expense of small independent or unaffiliated distributors and exhibitors.
316

The Edinburgh/Durham Southern Galaxy Catalogue : an investigation into the large-scale structure of the Universe

Heydon-Dumbleton, Neil H. January 1989 (has links)
This thesis describes the construction and application of the Edinburgh/Durham Southern Galaxy Catalogue, a database of information on ~1.5 million galaxies covering ~2000 deg² of the South Galactic Cap. The catalogue is based on objective image detection and classification techniques, rather than the visual searches of photographic plates used in previous galaxy catalogues. The raw data for the project are digitised scans of 60 ESO/SERC Atlas plates made with the COSMOS high-speed plate measuring machine. The quality controls employed during the production of the ESO/SERC Atlas ensure that it is deeper and more uniform than any set of plates used previously to construct a galaxy catalogue. The COSMOS machine objectively detects and parameterises ~2 × 10⁵ images on each photographic plate. Image deblending software has been introduced to ensure the accurate detection and parameterisation of images in the crowded regions of compact clusters. Star-galaxy classification and photometric calibration techniques have been investigated and optimised to reduce and quantify any systematic variations that could introduce spurious structure. A classification algorithm has been used to automatically classify images over the whole range of magnitudes in the survey. Accurate intra-plate photometry is possible for galaxies because a COSMOS magnitude can be defined which is linearly related to the object magnitude. Inter-plate calibration is carried out using CCD galaxy sequences for every second field in the catalogue. Unlike the global calibration techniques used previously, this arrangement of CCDs prevents propagation of calibration errors. Statistics are given to show that the final catalogue of galaxies will be > 95% complete for bj < 20.0 with < 10% contamination by stars, and that the point-to-point variation in galaxy number density, due to the combined residual systematic errors in classification and calibration, is ~8%.
To date a mosaic of 35 plates covering a contiguous region of 1000 deg² has been constructed. The large-scale galaxy distribution, seen in maps of these data, is dominated by two large supercluster complexes separated by ~15°–20°. Several filamentary arc structures can also be seen, with clusters distributed along them. The number-magnitude counts derived from this database show significant deviation from a no-evolution model at bj > 18.75. The variation in the amplitude of the counts across the survey cannot be accounted for by systematic variations in limiting magnitude, and so is probably due to large-scale clustering of galaxies. The two-point correlation functions calculated for this 35-plate mosaic confirm a break from power-law behaviour, though at larger scales (~20 h⁻¹ Mpc) than previously estimated. In the context of current theories of galaxy formation, models involving standard cold dark matter with extra large-scale power would still seem to be excluded.
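The two-point correlation analysis mentioned in this abstract can be illustrated with a toy version of the natural pair-count estimator, w = (DD/RR) · norm − 1, applied to hypothetical point sets rather than the catalogue's data:

```python
import math

def pair_count(points, r_lo, r_hi):
    """Count point pairs whose separation falls in [r_lo, r_hi)."""
    n = len(points)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if r_lo <= math.dist(points[i], points[j]) < r_hi:
                count += 1
    return count

def w_estimate(data, randoms, r_lo, r_hi):
    """Natural estimator: w = (DD/RR) * Nr(Nr-1)/(Nd(Nd-1)) - 1.

    Positive w in a separation bin indicates clustering in excess of a
    uniform distribution; zero indicates no clustering at that scale.
    """
    dd = pair_count(data, r_lo, r_hi)
    rr = pair_count(randoms, r_lo, r_hi)
    nd, nr = len(data), len(randoms)
    norm = (nr * (nr - 1)) / (nd * (nd - 1))
    return dd / rr * norm - 1.0 if rr else float("nan")

# Hypothetical demo: two tight "galaxy" clusters vs. a uniform comparison grid.
cluster = [(0.2 + 0.005 * i, 0.2) for i in range(10)] + \
          [(0.8, 0.8 + 0.005 * i) for i in range(10)]
grid = [(0.05 + 0.1 * i, 0.05 + 0.1 * j) for i in range(10) for j in range(10)]
print(w_estimate(cluster, grid, 0.0, 0.15) > 0)  # clustered data gives w > 0
```

Real surveys use less biased estimators (e.g. with data-random cross pairs) and proper angular coordinates; this O(n²) sketch only shows the shape of the computation.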
317

Perceptions of innovations : exploring and developing innovation classification

Adams, Richard January 2003 (has links)
The capacity to innovate is commonly regarded as a key response mechanism, a critical organisational competence for success, even survival, for organisations operating in turbulent conditions. Understanding how innovation works therefore continues to be a significant agenda item for many researchers. Innovation, however, is generally recognised to be a complex and multi-dimensional phenomenon. Classificatory approaches have been used to provide conceptual frameworks for descriptive purposes and to help better understand innovation. Further, by the facility of pattern recognition, classificatory approaches also attempt to elevate theorising from the specific and contextual to something more abstract and generalisable. Over the last 50 years researchers have sought to explain variance in innovation activities and processes, adoption and diffusion patterns, and performance outcomes in terms of these different 'types' of innovation. Three generic approaches to the classification of innovations can be found in the literature (innovation newness, area of focus and attributes). In this research, several limitations of these approaches are identified: narrow specification, inconsistent application across studies, and indistinct and permeable boundaries between categories. One consequence is that opportunities for cumulative and comparative research are hampered. The assumption underpinning this research is that, given artefact multidimensionality, it is reasonable to expect the diversity of attributes to be patterned into distinct configurations. In a mixed-method study comprising three empirical phases, the innovation classification problem is addressed through the design, testing and application of a multi-dimensional framework of innovation, predicated on perceived attributes.
Phase I is characterised by an iterative process in which data from four case studies of successful innovation in the UK National Health Service are synthesised with those drawn from an extensive thematic interrogation of the literature, in order to develop the framework. The second phase is concerned with identifying whether or not innovations configure into discrete, identifiable types based on the multidimensional conceptualisation of the innovation artefact, construed in terms of innovation attributes. The framework is operationalised in the form of a 56-item survey instrument, administered to a sample of 310 different innovations; 196 returns were analysed using methods developed in biological systematics. From this analysis, a taxonomy consisting of three discrete types (type 1, type 2 and type 3 innovations) emerges. The taxonomy provides the basis for additional theoretical development. In phase III of the research, the utility of the taxonomy is explored in a qualitative investigation of the processes underpinning the development of exemplar cases of each of the three innovation types. This research presents an integrative approach to the study of innovation based on the attributes of the innovation itself, rather than its effects. Where the challenge is to manage multiple discrete data combinations along a number of dimensions, the configurational approach is especially relevant and can provide a richer understanding and description of the phenomenon of interest. Whilst none of the dimensions that comprise the proposed framework are new in themselves, what is original is the attempt to deal with them simultaneously, in order that innovations may be classified according to differences in the way in which their attributes configure. This more sensitive classification of the artefact permits a clearer exploration of the relationships between the innovation, its processes and its outcomes.
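The phase II analysis draws on numerical methods from biological systematics. A toy single-linkage agglomerative clustering of hypothetical attribute profiles (not the thesis's survey data, and a deliberately simplified stand-in for its actual method) illustrates how discrete types can emerge from attribute configurations:

```python
import math

def agglomerate(items, k):
    """Single-linkage agglomerative clustering of attribute vectors into k groups."""
    clusters = [[i] for i in range(len(items))]

    def linkage(c1, c2):
        # Single linkage: distance between the closest pair across two clusters.
        return min(math.dist(items[i], items[j]) for i in c1 for j in c2)

    while len(clusters) > k:
        # Find and merge the two closest clusters.
        a, b = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda p: linkage(clusters[p[0]], clusters[p[1]]))
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters

# Hypothetical attribute profiles for nine innovations, three per latent type:
profiles = [(0, 0), (0, 1), (1, 0),      # "type 1"-like
            (5, 5), (5, 6), (6, 5),      # "type 2"-like
            (10, 0), (10, 1), (11, 0)]   # "type 3"-like
groups = agglomerate(profiles, 3)
print(sorted(sorted(g) for g in groups))  # → [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```

With well-separated profiles the three latent types are recovered exactly; on real survey data the choice of distance measure and linkage criterion materially affects the resulting taxonomy.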
318

Predictive analytics for classification of immigration visa applications: a discriminative machine learning approach

Vegesana, Sharmila January 1900 (has links)
Master of Science / Department of Computer Science / William Hsu / This work focuses on the data science challenge problem of predicting the decisions of past immigration visa applications using supervised machine learning for classification. I describe an end-to-end approach that first prepares historical data for supervised inductive learning, trains various discriminative models, and evaluates these models using simple statistical validation methods. The H-1B visa allows employers in the United States to temporarily employ foreign nationals in various specialty occupations that require a bachelor's degree or higher in the specific specialty, or its equivalent. These specialty occupations may include, but are not limited to: medicine, health, journalism, and areas of science, technology, engineering and mathematics (STEM). Every year the United States Citizenship and Immigration Services (USCIS) grants a current maximum of 85,000 visas, even though the number of applicants far exceeds this cap, and the selection process is claimed to be a lottery. The dataset used for this experimental research project contains all the petitions made under this visa cap from 2011 to 2016. The project aims to use discriminative machine learning techniques to classify these petitions and predict the "case status" of each petition based on various factors. Exploratory data analysis is also performed to determine the top employers, the locations that most appeal to foreign nationals under this visa cap, and the job roles with the highest numbers of foreign workers. I apply supervised inductive learning algorithms such as Gaussian Naïve Bayes, Logistic Regression, and Random Forests to identify the most probable factors for H-1B visa certification and compare the results of each to determine the best predictive model for this testbed.
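One of the models compared in this work, Gaussian Naïve Bayes, can be sketched in a few lines of plain Python. The features and labels below are hypothetical stand-ins for the petition data, not fields from the actual dataset:

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Fit per-class feature means, variances, and log-priors (Gaussian NB)."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model = {}
    for c, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        # Small floor on the variance avoids division by zero for constant features.
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-9
                 for col, m in zip(zip(*rows), means)]
        model[c] = (math.log(n / len(X)), means, varis)
    return model

def predict_gnb(model, x):
    """Pick the class with the highest log-posterior under the naive assumption."""
    def score(c):
        log_prior, means, varis = model[c]
        return log_prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, varis))
    return max(model, key=score)

# Hypothetical features: (prevailing wage in $1000s, full-time flag).
X = [(60, 1), (95, 1), (120, 1), (30, 0), (25, 0), (40, 0)]
y = ["CERTIFIED", "CERTIFIED", "CERTIFIED", "DENIED", "DENIED", "DENIED"]
model = fit_gnb(X, y)
print(predict_gnb(model, (100, 1)))  # → CERTIFIED
```

A production pipeline would instead use a library implementation (e.g. scikit-learn's `GaussianNB`) plus proper encoding of categorical fields and cross-validation, as the abstract's end-to-end approach implies.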
319

Machine Learning Strategies for Large-scale Taxonomies / Strategies d'apprentissage pour la classification dans les grandes taxonomies

Babbar, Rohit 17 October 2014 (has links)
In the era of Big Data, we need efficient and scalable machine learning algorithms which can perform automatic classification of terabytes of data. In this thesis, we study the machine learning challenges for classification in large-scale taxonomies. These challenges include the computational complexity of training and prediction and the performance on unseen data. In the first part of the thesis, we study the power-law distribution underlying large-scale taxonomies.
This analysis motivates the derivation of bounds on the space complexity of hierarchical classifiers. Exploiting the study of this distribution further, we design a classification scheme which leads to better accuracy on large-scale power-law distributed categories. We also propose an efficient method for model selection when training multi-class classifiers such as Support Vector Machines and Logistic Regression. Finally, we address another key model-selection problem in large-scale classification: the choice between flat and hierarchical classification, from a learning-theoretic perspective. The presented generalization-error analysis provides an explanation for empirical findings in many recent studies of large-scale hierarchical classification. We further exploit the developed bounds to propose two methods for adapting a given taxonomy of categories to output taxonomies which yield better test accuracy when used in a top-down setup.
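The power-law observation in the first part can be illustrated with a short sketch that estimates the log-log slope of category size against rank; the data here are synthetic Zipf-like counts, not any real taxonomy:

```python
import math

def powerlaw_slope(sizes):
    """Least-squares slope of log(size) vs log(rank).

    For category sizes following a power law size ∝ rank^(-beta),
    the fitted slope is approximately -beta.
    """
    pts = [(math.log(rank + 1), math.log(s))
           for rank, s in enumerate(sorted(sizes, reverse=True))]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# Synthetic Zipf-like category sizes: size(rank) ≈ 10000 / rank.
sizes = [10000 // r for r in range(1, 201)]
print(round(powerlaw_slope(sizes), 2))  # close to -1 for Zipf-distributed sizes
```

A slope near −1 is the classic Zipf case; the space-complexity bounds mentioned above depend on this exponent, with heavier skew concentrating most training examples in a few large categories.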
320

Studies of information use by engineering designers and the development of strategies to aid in its classification and retrieval

Lowe, Alistair January 2002 (has links)
This thesis presents an approach for supporting the information access requirements of engineering designers. Technical and cultural factors are increasing the quantity of information available to designers. As a consequence, they require improved tools not just to retrieve this information but also to organise and classify it into meaningful structures that assist in its management. The research has been undertaken from two interrelated standpoints. The first focused on empirical studies of the information access requirements of practising designers. The second concerned the development, based on the key findings from the initial studies, of classification structures for aerospace design information and their incorporation in a computer-based information system. The empirical studies of designers were carried out in two separate stages. The first involved the characterisation of the information usage of a range of engineering designers with different backgrounds; the results indicated important differences in the usage and storage of information between designers. The second study examined documents used by practising designers. From this, a number of core classification scheme types were identified that allow information to be organised from a variety of user perspectives. The results of the empirical studies informed the development of a novel information system based on a combination of: (i) faceted-like, automatic, non-mutually-exclusive classification principles and (ii) a hybrid browsing approach that 'prunes' the browsable classification scheme according to concept selections made by the user. The system overcomes some of the usual problems of browsing classification structures and allows the inference of linked relationships between different classification categories. This represents a powerful feature that is beyond the capabilities of existing search approaches.
The benefits of the system, when applied to a number of typical engineering information search scenarios, are discussed, followed by an evaluation of the approach. Finally, a number of conclusions and suggestions for future research are presented.
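The facet-pruning behaviour this abstract describes can be sketched as follows; the facet names and documents are hypothetical, not the thesis's aerospace dataset:

```python
def prune_facets(docs, selections):
    """Return docs matching every selected facet value, plus the facet values
    still available, i.e. those co-occurring with the current selection."""
    matching = [d for d in docs
                if all(v in d["facets"].get(f, set())
                       for f, v in selections.items())]
    available = {}
    for d in matching:
        for f, vals in d["facets"].items():
            available.setdefault(f, set()).update(vals)
    return matching, available

# Hypothetical design documents with faceted, non-mutually-exclusive tags.
docs = [
    {"id": "D1", "facets": {"component": {"wing"}, "lifecycle": {"design"}}},
    {"id": "D2", "facets": {"component": {"wing"}, "lifecycle": {"test"}}},
    {"id": "D3", "facets": {"component": {"engine"}, "lifecycle": {"design"}}},
]
hits, remaining = prune_facets(docs, {"component": "wing"})
print([d["id"] for d in hits], sorted(remaining["lifecycle"]))
# → ['D1', 'D2'] ['design', 'test']
```

Selecting `component = wing` hides the `engine` branch entirely, which is the pruning effect: the user only ever browses facet values that can still return results.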
