1.
Modeling the decision making mind: Does form follow function? Jarecki, Jana Bianca, 07 December 2017
The behavioral sciences examine human decision processes from two complementary perspectives: form and function. Questions of form concern how cognitive processes unfold; questions of function concern which goals the resulting behavior fulfills. This dissertation argues for the integration of form and function.
One step toward integrating form and function is to introduce process models from cognitive psychology into evolutionary psychology and behavioral biology (which frequently address questions of function). Study 1 examines the properties of cognitive process models. I propose a framework for general cognitive process models that can be used to develop such models.
In Study 2 I investigate classification from the perspective of form and function. Do people behave according to a statistical assumption that computer science has found to be robust against its own violation? Data from two learning experiments and modeling with a new probabilistic learning model show that, at the beginning of the learning process, people categorize according to the statistical principle of class-conditional independence.
Study 3 addresses risky decisions from the perspective of form and function. Do information-processing processes depend on the goal of the decision? I measure process and behavioral indicators in ten risk domains that reflect evolutionary goals. The results show that risk attitudes are domain-specific; in particular, women are not universally more risk-averse than men. At the process level, the valence of the decision-relevant arguments influences the domain differences less than the most frequently mentioned aspects for or against the risky behavior. / The behavioral sciences investigate human decision processes from two complementary perspectives: form and function. Questions of form concern the processes that lead to decisions; questions of function concern the goals that the resulting behavior meets. This dissertation argues for integrating questions of form and function.
One step towards a form-function integration is introducing cognitive process models into evolutionary psychology and behavioral biology (which mostly ask about the goals of behavior). Study 1 investigates the properties of cognitive process models. I suggest the first general framework for building cognitive process models.
In Study 2 I investigate human category learning from a form- and function-centered perspective. Do humans, when learning a novel categorization task, follow a statistical principle that has been shown to achieve correct classification robustly even when its underlying assumption is violated? Data from two learning experiments and cognitive modeling with a novel probabilistic learning model show that humans start classifying by following the statistical principle of class-conditional independence of features.
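For readers unfamiliar with the term, class-conditional independence is the assumption underlying naive Bayes classification: the likelihood of a feature vector given a class factorizes into per-feature likelihoods. The following is a minimal illustrative sketch of that principle only; it is not the dissertation's probabilistic learning model, whose specification is not given in this abstract, and all names and example values are placeholders.

```python
import numpy as np

def predict_class(x, class_priors, feature_likelihoods):
    """Classify feature vector x under class-conditional independence.

    class_priors:        dict mapping class label -> P(class)
    feature_likelihoods: dict mapping class label -> list of callables,
                         one per feature, each returning P(x_i | class)
    """
    posteriors = {}
    for c, prior in class_priors.items():
        # Independence assumption: P(x | c) = product over features of P(x_i | c)
        likelihood = np.prod([f(xi) for f, xi in zip(feature_likelihoods[c], x)])
        posteriors[c] = prior * likelihood
    # Pick the class with the largest (unnormalized) posterior
    return max(posteriors, key=posteriors.get)

# Purely illustrative example: two binary features, two classes
priors = {"A": 0.5, "B": 0.5}
likelihoods = {
    "A": [lambda v: 0.8 if v == 1 else 0.2, lambda v: 0.6 if v == 1 else 0.4],
    "B": [lambda v: 0.3 if v == 1 else 0.7, lambda v: 0.5 if v == 1 else 0.5],
}
print(predict_class([1, 0], priors, likelihoods))  # -> "A"
```

Under this assumption the learner only needs per-feature likelihoods for each class, which is part of why the principle can perform well early in learning even when features are in fact correlated.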
Study 3 investigates risk attitudes from the perspective of form and function. Does the information people process relate to the goals of risky behavior? I measure process and behavioral indicators in ten risk domains that represent different evolutionary goals. The results show not only that risk attitudes differ across domains, but also that females are not universally less risk-taking than males. Further, on the process level, the valence of the aspects related to perceived risks is less related to people's risk propensities than the most frequently mentioned aspects for or against the risky behavior.
2.
ANALYSIS OF LATENT SPACE REPRESENTATIONS FOR OBJECT DETECTION. Ashley S Dale (8771429), 03 September 2024
<p dir="ltr">Deep Neural Networks (DNNs) successfully perform object detection tasks, and the Con- volutional Neural Network (CNN) backbone is a commonly used feature extractor before secondary tasks such as detection, classification, or segmentation. In a DNN model, the relationship between the features learned by the model from the training data and the features leveraged by the model during test and deployment has motivated the area of feature interpretability studies. The work presented here applies equally to white-box and black-box models and to any DNN architecture. The metrics developed do not require any information beyond the feature vector generated by the feature extraction backbone. These methods are therefore the first methods capable of estimating black-box model robustness in terms of latent space complexity and the first methods capable of examining feature representations in the latent space of black box models.</p><p dir="ltr">This work contributes the following four novel methodologies and results. First, a method for quantifying the invariance and/or equivariance of a model using the training data shows that the representation of a feature in the model impacts model performance. Second, a method for quantifying an observed domain gap in a dataset using the latent feature vectors of an object detection model is paired with pixel-level augmentation techniques to close the gap between real and synthetic data. This results in an improvement in the model’s F1 score on a test set of outliers from 0.5 to 0.9. Third, a method for visualizing and quantifying similarities of the latent manifolds of two black-box models is used to correlate similar feature representation with increase success in the transferability of gradient-based attacks. Finally, a method for examining the global complexity of decision boundaries in black-box models is presented, where more complex decision boundaries are shown to correlate with increased model robustness to gradient-based and random attacks.</p>