111

Detecting nighttime fire combustion phase by hybrid application of visible and infrared radiation from Suomi NPP VIIRS

Roudini, Sepehr 01 August 2019 (has links)
Accurate estimation of biomass burning emissions is limited in part by the lack of knowledge of fire burning phase (smoldering/flaming). In recent years, several fire detection products have been developed to provide information on fire radiative power (FRP), location, size, and temperature of fire pixels, but no information regarding fire burning phase is retrieved. The Day-Night Band (DNB) aboard the Visible Infrared Imaging Radiometer Suite (VIIRS) is sensitive to visible light from flaming fires at night. In contrast, the VIIRS 4 µm moderate-resolution band #13 (M13), though capable of detecting fires in all phases, has no direct sensitivity for discerning fire phase. However, hybrid use of VIIRS DNB and M-band data is hampered by their different scanning technologies and spatial resolutions. In this study, we present a novel method to rapidly and accurately resample DNB pixel radiances to the M-band pixel footprint, based on the respective characteristics of the DNB and M-band onboard schemes for detector aggregation and bow-tie effect removal. Subsequently, the visible energy fraction (VEF) is introduced as an indicator of fire burning phase and is calculated as the ratio of visible light power (VLP) to FRP for each fire pixel retrieved from the VIIRS 750 m active fire product. A global distribution of VEF values, and thereby of fire phase, is quantitatively obtained, showing mostly smoldering wildfires such as peatland fires (with smaller VEF values) in Indonesia, flaming wildfires (with larger VEF values) over grasslands and savannahs in the sub-Sahel region, and gas flares (with the largest VEF values) in the Middle East. VEF is highly correlated with modified combustion efficiency (MCE) for different land cover types and regions. These results, together with a case study of the 2018 California Camp Fire, show that the VEF has the potential to serve as an indicator of fire combustion phase for each fire pixel, appropriate for estimating emission factors at the satellite pixel level.
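The VEF computation itself reduces to a per-pixel ratio once DNB radiances have been resampled to the M-band fire pixels. Below is a minimal Python sketch, assuming VLP and FRP are already available as per-pixel arrays in matching units; the classification cutoff is purely illustrative and not a value from the study.

```python
import numpy as np

def visible_energy_fraction(vlp, frp):
    """Ratio of visible light power (VLP) to fire radiative power (FRP) per fire pixel.

    Inputs are assumed to be 1-D arrays aligned to the VIIRS 750 m active-fire
    pixels, in the same power units, so the ratio is dimensionless.
    """
    vlp = np.asarray(vlp, dtype=float)
    frp = np.asarray(frp, dtype=float)
    vef = np.full_like(frp, np.nan)
    valid = frp > 0
    vef[valid] = vlp[valid] / frp[valid]
    return vef

# Smaller VEF suggests smoldering, larger VEF flaming; the 0.05 cutoff is invented.
vef = visible_energy_fraction([0.02, 1.5], [10.0, 12.0])
phase = np.where(vef < 0.05, "smoldering", "flaming")
print(vef, phase)
```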
112

台灣血液透析患者的個人化生活品質:以SEIQoL-DW為測量工具 / Taiwan hemodialysis patients’ individual quality of life: assessed by SEIQoL-DW

羅一哲, Luo, Yi Jhe Unknown Date (has links)
本研究目的為使用個人化生活品質評量表直接權重版(SEIQoL-DW),評估國內血液透析患者的個人化生活品質,探討對其特別重要的生活品質向度與相關影響因素,此外,並以Locke(1969, 1976)(引自Wu & Yao,2006b)的「情感間距假設」、Calman(1984)的期望理論與Wu(2009)提出的移轉傾向指標為基礎,探討SEIQoL-DW重要性評估程序的應用價值與潛在臨床應用指標。研究對象以立意取樣,自台北市松山區某私人洗腎中心募集57名血液透析患者、以及台北市文山區一般社區成人60名,研究工具包含SEIQoL-DW、生活滿意度量表、焦慮與憂鬱評量表、自編人口背景/疾病變項問卷,統計方法含描述性統計、卡方檢定、相關分析、平均數差異檢定與迴歸分析。研究結果發現,血液透析患者最常列舉的重要生活向度為健康(77%),其次依序為家庭(72%)、經濟(65%)、人際關係(53%)、休閒活動(49%)、因應/正向態度(23%)、工作/學業(21%)、心理健康(16%)、生活條件(16%)、靈性/信仰(9%)、角色功能(9%)、其他(9%),血液透析患者在各向度提名百分比與一般成人組未有顯著差異,但血液透析患者對健康向度的現況評比顯著較低,對健康的現況-期望落差也顯著較高,共病數與血液透析患者個人化生活品質指標有顯著負相關;而SEIQoL-DW項目重要性對項目現況分數與整體滿意度之間的關係不具調節效果,SEIQoL-DW的權重程序未能提升對整體滿意度的解釋力,此外,自項目重要性與現況-想望落差所得移轉傾向指標,和整體生活滿意度、SEIQoL-DW現況平均數亦未有一致的相關性或獨特解釋力。儘管本研究不支持SEIQoL-DW權重程序或衍生指標的助益,但若從個人脈絡來看,向度重要性仍可協助探索個案生活目標重要順序,有其臨床醫療應用價值;最後,SEIQoL-DW個人現況-想望落差分數、現況平均數、以及整體生活滿意度、負向情緒彼此有顯著關聯性與解釋力,在個案生活滿意度、負向情緒評估或介入方案中,具有成為臨床應用指標的潛力。 / The primary purpose of this thesis was to use the Schedule for the Evaluation of Individual Quality of Life-Direct Weighting (SEIQoL-DW) to explore hemodialysis patients’ individual quality of life (IQoL) and its related determinants, and further to use the affect-range hypothesis (Locke, 1969, 1976), expectation theory (Calman, 1984) and the shifting tendency index (Wu, 2009) as a framework for evaluating the efficiency of the SEIQoL-DW weighting procedure and potential clinical application variables. Fifty-seven hemodialysis patients and 60 community counterparts were recruited in Taipei City; IQoL was assessed with the SEIQoL-DW, and general life satisfaction and anxiety/depression state were chosen as criterion variables. In the analysis, twelve quality-of-life domains were identified. Health (77%), family (72%), finance (65%), relationships (53%) and leisure time (49%) were the most prominent domains for hemodialysis patients. Although domain nomination percentages and importance ratings did not differ between groups, hemodialysis patients’ health-domain status and have-want discrepancy were worse than those of the counterparts. Among the investigated variables, only comorbidity had a negative correlation with hemodialysis patients’ IQoL. The results did not support a significant benefit of the SEIQoL-DW weighting procedure or the shifting tendency index, but the weighting information could still be useful in a personal profile context. Finally, the personal have-want discrepancy, status average, general life satisfaction and anxiety/depression state were significantly related to one another and could thus serve as potential clinical application variables in negative-emotion or life-satisfaction intervention programs.
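For reference, the SEIQoL-DW index is conventionally computed as the weight-adjusted sum of the five nominated domains' current-status ratings. A minimal sketch, assuming ratings on a 0-100 scale and direct weights that sum to 1; the example values are invented, not patient data.

```python
def seiqol_dw_index(levels, weights):
    """SEIQoL-DW index: weighted sum of domain status ratings (0-100 scale)."""
    if len(levels) != len(weights):
        raise ValueError("one weight per nominated domain")
    total = sum(weights)
    if abs(total - 1.0) > 1e-6:
        # tolerate weights supplied as percentages summing to 100
        weights = [w / total for w in weights]
    return sum(l * w for l, w in zip(levels, weights))

# Example: health, family, finance, relationships, leisure (invented values)
print(seiqol_dw_index([40, 80, 55, 70, 60], [0.35, 0.25, 0.20, 0.10, 0.10]))
```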
113

Photon Counting X-ray Detector Systems

Norlin, Börje January 2005 (has links)
This licentiate thesis concerns the development and characterisation of X-ray imaging detector systems. “Colour” X-ray imaging opens up new perspectives within the fields of medical X-ray diagnosis and also in industrial X-ray quality control. The difference in absorption for different “colours” can be used to discern materials in the object. For instance, this information might be used to identify diseases such as brittle-bone disease. The “colour” of the X-rays can be identified if the detector system can process each X-ray photon individually. Such a detector system is called a “single photon processing” system or, less precisely, a “photon counting system”.
With modern technology it is possible to construct photon counting detector systems that can resolve details down to a level of approximately 50 µm. However, with such small pixels a problem occurs. In a semiconductor detector each absorbed X-ray photon creates a cloud of charge which contributes to the picture obtained. For high photon energies the size of the charge cloud is comparable to 50 µm, and the charge might be distributed between several pixels in the picture. Charge sharing is a key problem since not only is the resolution degraded, but the “colour” information in the picture is also destroyed.
The problem of charge sharing, which limits “colour” X-ray imaging, is discussed in this thesis. Image quality, detector effectiveness and “colour correctness” are studied on pixellated detectors from the MEDIPIX collaboration. Characterisation measurements and simulations are compared in order to understand the physical processes that take place in the detector. Simulations can provide pointers for the future development of photon counting X-ray systems. Charge sharing can be suppressed by introducing 3D detector structures or by developing readout systems which can correct the crosstalk between pixels.
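To illustrate the charge-sharing mechanism described above, here is a toy one-dimensional sketch (not from the thesis): a Gaussian charge cloud deposited near a pixel edge splits its charge between that pixel and its neighbour, so a single photon can register in two pixels and its recorded energy ("colour") is divided. The pixel pitch, cloud width and impact position are assumed values.

```python
from math import erf, sqrt

def shared_fractions(x0_um, sigma_um, pitch_um=50.0):
    """Fraction of a Gaussian charge cloud collected by the hit pixel vs. its
    nearest neighbour in a simple 1-D model (illustrative numbers only)."""
    d = min(x0_um, pitch_um - x0_um)              # distance to the nearest pixel edge
    tail = 0.5 * (1.0 - erf(d / (sqrt(2.0) * sigma_um)))
    return 1.0 - tail, tail

# A 20 um wide cloud absorbed 5 um from the edge of a 50 um pixel
own, neighbour = shared_fractions(x0_um=5.0, sigma_um=20.0)
print(f"own pixel: {own:.2f}, neighbour: {neighbour:.2f}")
```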
114

Bayesian risk management : "Frequency does not make you smarter"

Fucik, Markus January 2010 (has links)
Within our research group Bayesian Risk Solutions we have coined the idea of a Bayesian Risk Management (BRM). It calls for (1) a more transparent and diligent data analysis as well as (2) an open-minded incorporation of human expertise in risk management. In this dissertation we formalize a framework for BRM based on the two pillars Hardcore-Bayesianism (HCB) and Softcore-Bayesianism (SCB), providing solutions for the claims above. For data analysis we favor Bayesian statistics with its Markov Chain Monte Carlo (MCMC) simulation algorithm. It provides a full illustration of data-induced uncertainty beyond classical point estimates. We calibrate twelve different stochastic processes to four years of CO2 price data. In addition, we calculate derived risk measures (ex-ante/ex-post value-at-risks, capital charges, option prices) and compare them to their classical counterparts. When statistics fails because of a lack of reliable data, we propose our integrated Bayesian Risk Analysis (iBRA) concept. It is a basic guideline for an expertise-driven quantification of critical risks. We additionally review elicitation techniques and tools that support experts in expressing their uncertainty. Unfortunately, Bayesian thinking is often blamed for its arbitrariness. Therefore, we introduce the idea of a Bayesian due diligence that judges expert assessments according to their information content and their inter-subjectivity. / Die vorliegende Arbeit befasst sich mit den Ansätzen eines Bayes’schen Risikomanagements zur Messung von Risiken. Dabei konzentriert sich die Arbeit auf folgende zentrale Fragestellungen: (1) Wie ist es möglich, transparent Risiken zu quantifizieren, falls nur eine begrenzte Anzahl an geeigneten historischen Beobachtungen zur Datenanalyse zur Verfügung steht? (2) Wie ist es möglich, transparent Risiken zu quantifizieren, falls mangels geeigneter historischer Beobachtungen keine Datenanalyse möglich ist? (3) Inwieweit ist es möglich, Willkür und Beliebigkeit bei der Risikoquantifizierung zu begrenzen? Zur Beantwortung der ersten Frage schlägt diese Arbeit die Anwendung der Bayes’schen Statistik vor. Im Gegensatz zu klassischen Kleinste-Quadrate bzw. Maximum-Likelihood Punktschätzern können Bayes’sche A-Posteriori Verteilungen die dateninduzierte Parameter- und Modellunsicherheit explizit messen. Als Anwendungsbeispiel werden in der Arbeit zwölf verschiedene stochastische Prozesse an CO2-Preiszeitreihen mittels des effizienten Bayes’schen Markov Chain Monte Carlo (MCMC) Simulationsalgorithmus kalibriert. Da die Bayes’sche Statistik die Berechnung von Modellwahrscheinlichkeiten zur kardinalen Modellgütemessung erlaubt, konnten Log-Varianz Prozesse als mit Abstand beste Modellklasse identifiziert werden. Für ausgewählte Prozesse wurden zusätzlich die Auswirkung von Parameterunsicherheit auf abgeleitete Risikomaße (ex-ante/ ex-post Value-at-Risks, regulatorische Kapitalrücklagen, Optionspreise) untersucht. Generell sind die Unterschiede zwischen Bayes’schen und klassischen Risikomaßen umso größer, je komplexer die Modellannahmen für den CO2-Preis sind. Überdies sind Bayes’sche Value-at-Risks und Kapitalrücklagen konservativer als ihre klassischen Pendants (Risikoprämie für Parameterunsicherheit). Bezüglich der zweiten Frage ist die in dieser Arbeit vertretene Position, dass eine Risikoquantifizierung ohne (ausreichend) verlässliche Daten nur durch die Berücksichtigung von Expertenwissen erfolgen kann. Dies erfordert ein strukturiertes Vorgehen. 
Daher wird das integrated Bayesian Risk Analysis (iBRA) Konzept vorgestellt, welches Konzepte, Techniken und Werkzeuge zur expertenbasierten Identifizierung und Quantifizierung von Risikofaktoren und deren Abhängigkeiten vereint. Darüber hinaus bietet es Ansätze für den Umgang mit konkurrierenden Expertenmeinungen. Da gerade ressourceneffiziente Werkzeuge zur Quantifizierung von Expertenwissen von besonderem Interesse für die Praxis sind, wurden im Rahmen dieser Arbeit der Onlinemarkt PCXtrade und die Onlinebefragungsplattform PCXquest konzipiert und mehrfach erfolgreich getestet. In zwei empirischen Studien wurde zudem untersucht, inwieweit Menschen überhaupt in der Lage sind, ihre Unsicherheiten zu quantifizieren und inwieweit sie Selbsteinschätzungen von Experten bewerten. Die Ergebnisse deuten an, dass Menschen zu einer Selbstüberschätzung ihrer Prognosefähigkeiten neigen und tendenziell hohes Vertrauen in solche Experteneinschätzungen zeigen, zu denen der jeweilige Experte selbst hohes Zutrauen geäußert hat. Zu letzterer Feststellung ist jedoch zu bemerken, dass ein nicht unbeträchtlicher Teil der Befragten sehr hohe Selbsteinschätzung des Experten als negativ ansehen. Da der Bayesianismus Wahrscheinlichkeiten als Maß für die persönliche Unsicherheit propagiert, bietet er keinerlei Rahmen für die Verifizierung bzw. Falsifizierung von Einschätzungen. Dies wird mitunter mit Beliebigkeit gleichgesetzt und könnte einer der Gründe sein, dass offen praktizierter Bayesianismus in Deutschland ein Schattendasein fristet. Die vorliegende Arbeit stellt daher das Konzept des Bayesian Due Diligence zur Diskussion. Es schlägt eine kriterienbasierte Bewertung von Experteneinschätzungen vor, welche insbesondere die Intersubjektivität und den Informationsgehalt von Einschätzungen beleuchtet.
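As a concrete illustration of the workflow described in the English abstract (MCMC calibration, then comparison of Bayesian and classical risk measures), the following sketch uses a deliberately simplified i.i.d. normal return model with flat priors and synthetic data standing in for the CO2 price series; it is not one of the twelve processes calibrated in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.02, size=250)   # synthetic stand-in for daily log-returns

def log_post(mu, log_sigma, r):
    """Log-posterior of an i.i.d. normal model with flat priors (a simplification)."""
    sigma = np.exp(log_sigma)
    return -len(r) * log_sigma - 0.5 * np.sum((r - mu) ** 2) / sigma**2

# Random-walk Metropolis over (mu, log_sigma)
theta = np.array([0.0, np.log(returns.std())])
current = log_post(*theta, returns)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.002, 0.05])
    cand = log_post(*prop, returns)
    if np.log(rng.uniform()) < cand - current:
        theta, current = prop, cand
    samples.append(theta)
samples = np.array(samples[5000:])           # discard burn-in

# Posterior-predictive 99% value-at-risk vs. the classical plug-in estimate
mu_s, sig_s = samples[:, 0], np.exp(samples[:, 1])
predictive = rng.normal(mu_s, sig_s)         # one draw per retained posterior sample
var_bayes = -np.quantile(predictive, 0.01)
var_plugin = -(returns.mean() - 2.326 * returns.std())
print(var_bayes, var_plugin)
```

The Bayesian figure is typically slightly more conservative because the posterior-predictive draws carry the parameter uncertainty that the plug-in estimate ignores, which mirrors the "risk premium for parameter uncertainty" noted in the German abstract.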
115

Difficulties to Read and Write Under Lateral Vibration Exposure : Contextual Studies Of Train Passengers Ride Comfort

Sundström, Jerker January 2006 (has links)
Many people use the train both as a daily means of transport and as a workplace for activities such as reading or writing. There are, however, several important factors in this environment that hamper good performance of such activities. Some of the main sources of disturbance, apart from other train passengers, are the noise and vibrations generated by the train itself. Although there are standards available for the evaluation of ride comfort in vehicles, none of them considers the effects that vibrations have on particular passenger activities. To address these issues, three studies were conducted to investigate how low-frequency lateral vibrations influence passengers' ability to read and write onboard trains. The first study was conducted on three types of Inter-Regional trains during normal service and included both a questionnaire survey and vibration measurements. Two subsequent laboratory studies were conducted in a train mock-up, where the perceived difficulty of reading and writing was evaluated for different frequencies and amplitudes. To model and clarify how vibrations influence the processes of reading and writing, the fundamentals of Human Activity Theory were used as a framework in this thesis. In the field study about 80% of the passengers were found to be reading at some point during the journey, 25% were writing by hand, and 14% worked with portable computers. The passengers adopted a wide range of seated postures for their different activities. According to the standardised measurements, even the trains running on poor tracks showed acceptable levels of vibration. However, when the passengers performed a short written test, over 60% reported being disturbed or affected by vibrations and noise in the train. In the laboratory studies it was found that the difficulty of reading and writing is strongly influenced by both vibration frequency and acceleration amplitude. The vibration spectra of real trains were found to correspond well to the frequency characteristics of the rated difficulty. It was also observed that moderate levels of difficulty begin at fairly low vibration levels. Contextual parameters such as sitting posture and type of activity also had a strong influence on how vibrations cause difficulty.
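Since the laboratory findings relate rated difficulty to vibration frequency and acceleration amplitude, a simple way to summarise a measured ride is the RMS lateral acceleration within a low-frequency band. The sketch below is illustrative only: it uses a synthetic signal and does not reproduce the frequency weighting of any ride-comfort standard.

```python
import numpy as np

def band_rms(accel, fs, f_lo, f_hi):
    """RMS of an acceleration signal restricted to one frequency band (illustrative,
    not the weighting prescribed by a ride-comfort standard)."""
    spec = np.fft.rfft(accel)
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    band = np.fft.irfft(np.where(keep, spec, 0), n=len(accel))
    return np.sqrt(np.mean(band ** 2))

# Synthetic 1.5 Hz lateral sway of 0.3 m/s^2 amplitude plus noise, sampled at 100 Hz
fs = 100
t = np.arange(0, 60, 1 / fs)
accel = 0.3 * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
print(band_rms(accel, fs, 0.5, 3.0))
```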
116

A Content Based Movie Recommendation System Empowered By Collaborative Missing Data Prediction

Karaman, Hilal 01 July 2010 (has links) (PDF)
The evolution of the Internet has brought us into a world that presents a huge number of information items, such as music, movies, books and web pages, of varying quality. As a result of this huge universe of items, people get confused, and the question “Which one should I choose?” arises in their minds. Recommendation systems address this problem by filtering a specific type of information with an information filtering technique that attempts to present the items that are likely to be of interest to the user. A variety of information filtering techniques have been proposed for performing recommendations; content-based and collaborative techniques are the most commonly used approaches in recommendation systems. This thesis work introduces ReMovender, a content-based movie recommendation system that is empowered by collaborative missing data prediction. The distinctive point of this study lies in the methodology used to correlate the users in the system with one another and in the use of the content information of movies. ReMovender makes it possible for users to rate movies on a scale from one to five. Using these ratings, it finds similarities among the users in a collaborative manner to predict the missing rating data. For the content-based part, a set of movie features is used to correlate the movies and produce recommendations for the users.
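A minimal sketch of the collaborative missing-data step described above, using a toy 1-to-5 rating matrix and cosine similarity between users; the similarity measure and the tiny data set are assumptions for illustration, not ReMovender's exact formulation. The content-based part would then score unseen movies by the similarity of their feature vectors to movies the user has rated highly.

```python
import numpy as np

# Toy user-by-movie rating matrix on a 1-5 scale; 0 marks a missing rating.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 5, 4],
              [0, 1, 4, 4]], dtype=float)

def predict_missing(R):
    """Fill missing ratings from similar users (cosine similarity; zeros treated
    as missing, which is a simplification)."""
    filled = R.copy()
    rated = R > 0
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    sim = (R @ R.T) / (norms @ norms.T + 1e-12)
    for u, m in zip(*np.where(~rated)):
        raters = rated[:, m]
        if raters.any():
            w = sim[u, raters]
            filled[u, m] = np.dot(w, R[raters, m]) / (np.abs(w).sum() + 1e-12)
    return filled

print(np.round(predict_missing(R), 2))
```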
117

Text mining : μια νέα προτεινόμενη μέθοδος με χρήση κανόνων συσχέτισης / Text mining : a new proposed method using association rules

Νασίκας, Ιωάννης 14 September 2007 (has links)
Η εξόρυξη κειμένου (text mining) είναι ένας νέος ερευνητικός τομέας που προσπαθεί να επιλύσει το πρόβλημα της υπερφόρτωσης πληροφοριών με τη χρησιμοποίηση των τεχνικών από την εξόρυξη από δεδομένα (data mining), την μηχανική μάθηση (machine learning), την επεξεργασία φυσικής γλώσσας (natural language processing), την ανάκτηση πληροφορίας (information retrieval), την εξαγωγή πληροφορίας (information extraction) και τη διαχείριση γνώσης (knowledge management). Στο πρώτο μέρος αυτής της διπλωματικής εργασίας αναφερόμαστε αναλυτικά στον καινούριο αυτό ερευνητικό τομέα, διαχωρίζοντάς τον από άλλους παρεμφερείς τομείς. Ο κύριος στόχος του text mining είναι να βοηθήσει τους χρήστες να εξαγάγουν πληροφορίες από μεγάλους κειμενικούς πόρους. Δύο από τους σημαντικότερους στόχους είναι η κατηγοριοποίηση και η ομαδοποίηση εγγράφων. Υπάρχει μια αυξανόμενη ανησυχία για την ομαδοποίηση κειμένων λόγω της εκρηκτικής αύξησης του WWW, των ψηφιακών βιβλιοθηκών, των ιατρικών δεδομένων, κ.λ.π.. Τα κρισιμότερα προβλήματα για την ομαδοποίηση εγγράφων είναι η υψηλή διαστατικότητα του κειμένου φυσικής γλώσσας και η επιλογή των χαρακτηριστικών γνωρισμάτων που χρησιμοποιούνται για να αντιπροσωπεύσουν μια περιοχή. Κατά συνέπεια, ένας αυξανόμενος αριθμός ερευνητών έχει επικεντρωθεί στην έρευνα για τη σχετική αποτελεσματικότητα των διάφορων τεχνικών μείωσης διάστασης και της σχέσης μεταξύ των επιλεγμένων χαρακτηριστικών γνωρισμάτων που χρησιμοποιούνται για να αντιπροσωπεύσουν το κείμενο και την ποιότητα της τελικής ομαδοποίησης. Υπάρχουν δύο σημαντικοί τύποι τεχνικών μείωσης διάστασης: οι μέθοδοι «μετασχηματισμού» και οι μέθοδοι «επιλογής». Στο δεύτερο μέρος αυτής τη διπλωματικής εργασίας, παρουσιάζουμε μια καινούρια μέθοδο «επιλογής» που προσπαθεί να αντιμετωπίσει αυτά τα προβλήματα. Η προτεινόμενη μεθοδολογία είναι βασισμένη στους κανόνες συσχέτισης (Association Rule Mining). Παρουσιάζουμε επίσης και αναλύουμε τις εμπειρικές δοκιμές, οι οποίες καταδεικνύουν την απόδοση της προτεινόμενης μεθοδολογίας. Μέσα από τα αποτελέσματα που λάβαμε διαπιστώσαμε ότι η διάσταση μειώθηκε. Όσο όμως προσπαθούσαμε, βάσει της μεθοδολογίας μας, να την μειώσουμε περισσότερο τόσο χανόταν η ακρίβεια στα αποτελέσματα. Έγινε μια προσπάθεια βελτίωσης των αποτελεσμάτων μέσα από μια διαφορετική επιλογή των χαρακτηριστικών γνωρισμάτων. Τέτοιες προσπάθειες συνεχίζονται και σήμερα. Σημαντική επίσης στην ομαδοποίηση των κειμένων είναι και η επιλογή του μέτρου ομοιότητας. Στην παρούσα διπλωματική αναφέρουμε διάφορα τέτοια μέτρα που υπάρχουν στην βιβλιογραφία, ενώ σε σχετική εφαρμογή κάνουμε σύγκριση αυτών. Η εργασία συνολικά αποτελείται από 7 κεφάλαια: Στο πρώτο κεφάλαιο γίνεται μια σύντομη ανασκόπηση σχετικά με το text mining. Στο δεύτερο κεφάλαιο περιγράφονται οι στόχοι, οι μέθοδοι και τα εργαλεία που χρησιμοποιεί η εξόρυξη κειμένου. Στο τρίτο κεφάλαιο παρουσιάζεται ο τρόπος αναπαράστασης των κειμένων, τα διάφορα μέτρα ομοιότητας καθώς και μια εφαρμογή σύγκρισης αυτών. Στο τέταρτο κεφάλαιο αναφέρουμε τις διάφορες μεθόδους μείωσης της διάστασης και στο πέμπτο παρουσιάζουμε την δικιά μας μεθοδολογία για το πρόβλημα. Έπειτα στο έκτο κεφάλαιο εφαρμόζουμε την μεθοδολογία μας σε πειραματικά δεδομένα. Η εργασία κλείνει με τα συμπεράσματα μας και κατευθύνσεις για μελλοντική έρευνα. / Text mining is a new searching field which tries to solve the problem of information overloading by using techniques from data mining, natural language processing, information retrieval, information extraction and knowledge management. 
In the first part of this diploma thesis we refer in detail to this new research field, separating it from other related fields. The main target of text mining is to help users extract information from large text resources. Two of the most important tasks are document categorization and document clustering. There is increasing interest in document clustering due to the explosive growth of the WWW, digital libraries, technical documentation, medical data, etc. The most critical problems for document clustering are the high dimensionality of natural language text and the choice of features used to represent a domain. Thus, an increasing number of researchers have concentrated on investigating the relative effectiveness of various dimension reduction techniques and the relationship between the selected features used to represent text and the quality of the final clustering. There are two important types of techniques that reduce dimension: transformation methods and selection methods. In the second part of this thesis we present a new selection method that tries to tackle these problems. The proposed methodology is based on Association Rule Mining. We also present and analyze empirical tests, which demonstrate the performance of the proposed methodology. The results we obtained show that the dimension was indeed reduced; however, the more we tried to reduce it with the methodology, the greater the loss of precision became. An effort was made to improve the results through a different feature selection, and such efforts continue today. The choice of the similarity measure is also important in document clustering. In this thesis we review several such measures from the literature and compare them in a related application. The thesis has seven chapters in total. The first chapter gives a brief review of text mining. The second chapter describes the tasks, methods and tools used in text mining. The third chapter presents document representation, the various similarity measures and an application comparing them. The fourth chapter covers different kinds of dimension reduction methods, and the fifth chapter presents our own methodology for the problem. In the sixth chapter we apply our methodology to experimental data. The thesis ends with our conclusions and directions for future research.
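The dimension-reduction idea the abstract describes, selecting representative terms via association rules, can be sketched as follows: mine frequent term pairs across documents and keep only the terms that take part in a rule whose support and confidence clear chosen thresholds. This is a toy reconstruction of the general technique with invented thresholds and data, not the thesis's exact algorithm.

```python
from collections import Counter
from itertools import combinations

docs = [
    {"data", "mining", "cluster"},
    {"text", "mining", "cluster"},
    {"text", "mining", "rules"},
    {"association", "rules", "mining"},
]

def select_features(docs, min_support=0.5, min_conf=0.6):
    """Keep terms that appear in a strong two-term association rule."""
    n = len(docs)
    term_df = Counter(t for d in docs for t in d)                 # document frequency
    pair_df = Counter(p for d in docs for p in combinations(sorted(d), 2))
    selected = set()
    for (a, b), c in pair_df.items():
        if c / n < min_support:
            continue
        # rules a -> b and b -> a
        if c / term_df[a] >= min_conf or c / term_df[b] >= min_conf:
            selected.update((a, b))
    return selected

print(select_features(docs))
```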
118

Charakterisierung mikrostruktureller Gewebeveränderungen bei der sporadischen Creutzfeldt-Jakob-Krankheit durch Korrelation von Diffusions- und Magnetisierungstransfer-Bildgebung / Characterization of microstructural tissue changes in sporadic Creutzfeldt-Jakob disease through correlation of magnetization transfer and diffusion MRI

Matros, Markus 06 July 2015 (has links)
Novel contrasts in magnetic resonance imaging such as diffusion weighting (DW) or magnetization transfer (MT) are increasingly used in clinical diagnostics. Whereas DW contrast arises from the differing diffusion properties of water molecules in tissue, MT contrast is influenced by the differing proportions of bound and free protons in the tissue. MT is based on a selective saturation of the protons bound to macromolecules and the subsequent transfer of this saturation of the magnetization to free protons. This exchange leads to a drop in the signal of the free protons. The method has the potential to allow conclusions about specific microstructural changes in tissue. In the present pilot study, a new parameter describing the MT contrast, the MT saturation, was examined for its potential to detect tissue changes in part of the basal ganglia in sporadic Creutzfeldt-Jakob disease (sCJD). Typical microstructural tissue changes in sCJD include deposits of pathological prion proteins, spongiform remodeling of the neuropil, astrocytic gliosis and neuronal loss. Anonymized clinical-diagnostic MRI data (3D FLASH, DWI) from 5 patients with definite or probable sCJD were examined retrospectively and compared with those of age-matched healthy controls. Using a region-of-interest analysis on the MT maps, MT values were determined in the head of the caudate nucleus, the putamen, the pulvinar and also the amygdala. In contrast to the pulvinar and the amygdala, changes could be demonstrated with this method in the caudate nucleus and the putamen: compared with a healthy control cohort, significantly lower MT values were found in both structures in sCJD patients. A regression analysis against DW MRI, the established diagnostic criterion, yielded a significant positive correlation between MT and mean diffusivity (MD), which suggests a link between increased diffusion barriers and increased water content. This correlation could be attributable to microcystic changes in the neuropil. An inverse correlation in the pulvinar, found in both the diseased and the healthy cohort, instead points to inherent structural barriers that dominate the restriction of diffusion. MT saturation thus has the potential to be used as a diagnostic criterion in sCJD, and the information gain can be increased by combining different quantitative MR techniques.
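The core quantitative step, regressing ROI-averaged MT saturation against mean diffusivity (MD), can be sketched in a few lines. The values below are invented placeholders, not data from the study.

```python
import numpy as np

# Hypothetical per-subject ROI means: MT saturation (arbitrary units) vs. MD (10^-3 mm^2/s)
mt = np.array([1.9, 1.7, 1.6, 1.8, 1.5, 2.1, 2.2, 2.0, 2.1, 2.3])
md = np.array([0.90, 0.80, 0.75, 0.85, 0.70, 1.00, 1.05, 0.95, 1.00, 1.10])

# Ordinary least-squares regression of MT on MD and the Pearson correlation
slope, intercept = np.polyfit(md, mt, 1)
r = np.corrcoef(md, mt)[0, 1]
print(f"MT = {slope:.2f} * MD + {intercept:.2f},  r = {r:.2f}")
```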
119

Two statistical problems related to credit scoring / Tanja de la Rey.

De la Rey, Tanja January 2007 (has links)
This thesis focuses on two statistical problems related to credit scoring. In credit scoring of individuals, two classes are distinguished, namely low and high risk individuals (the so-called "good" and "bad" risk classes). Firstly, we suggest a measure which may be used to study the nature of a classifier for distinguishing between the two risk classes. Secondly, we derive a new method, DOUW (detecting outliers using weights), which may be used to fit logistic regression models robustly and for the detection of outliers. In the first problem, the focus is on a measure which may be used to study the nature of a classifier. This measure transforms a random variable so that it has the same distribution as another random variable. Assuming a linear form of this measure, three methods for estimating the parameters (slope and intercept) and for constructing confidence bands are developed and compared by means of a Monte Carlo study. The application of these estimators is illustrated on a number of datasets. We also construct statistical hypothesis tests of this linearity assumption. In the second problem, the focus is on providing a robust logistic regression fit and the identification of outliers. It is well-known that maximum likelihood estimators of logistic regression parameters are adversely affected by outliers. We propose a robust approach that also serves as an outlier detection procedure and is called DOUW. The approach is based on associating high and low weights with the observations as a result of the likelihood maximization. It turns out that the outliers are those observations to which low weights are assigned. This procedure depends on two tuning constants. A simulation study is presented to show the effects of these constants on the performance of the proposed methodology. The results are presented in terms of four benchmark datasets as well as a large new dataset from the application area of retail marketing campaign analysis. In the last chapter we apply the techniques developed in this thesis to a practical credit scoring dataset. We show that the DOUW method improves the classifier performance and that the measure developed to study the nature of a classifier is useful in a credit scoring context and may be used for assessing whether the distribution of the good and the bad risk individuals is from the same translation-scale family. / Thesis (Ph.D. (Risk Analysis))--North-West University, Potchefstroom Campus, 2008.
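The DOUW idea of assigning low weights to likely outliers during likelihood maximisation can be illustrated with a generic iteratively re-weighted logistic fit. The sketch below down-weights observations with large Pearson residuals and flags low-weight observations as potential outliers; it is only a stand-in for the weighting concept, since the actual DOUW estimator and its two tuning constants are defined in the thesis, not here.

```python
import numpy as np

def robust_logistic(X, y, c=2.0, n_iter=50):
    """Robust logistic regression via iterative down-weighting of large residuals
    (a generic sketch, not the DOUW estimator itself)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X1.shape[1])
    w = np.ones(len(y))
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        resid = (y - p) / np.sqrt(p * (1 - p) + 1e-12)      # Pearson residuals
        w = np.where(np.abs(resid) > c, c / np.abs(resid), 1.0)
        # one weighted Newton step on the logistic log-likelihood
        W = w * p * (1 - p)
        H = X1.T @ (X1 * W[:, None])
        g = X1.T @ (w * (y - p))
        beta = beta + np.linalg.solve(H + 1e-8 * np.eye(len(beta)), g)
    return beta, w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(float)
y[:5] = 1 - y[:5]                                           # contaminate a few labels
beta, weights = robust_logistic(X, y)
print("flagged outliers:", np.where(weights < 0.5)[0])
```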
