611 |
Оценка кредитоспособности заёмщика с учётом нейрокогнитивных факторов / Assessment of creditworthiness of the borrower taking into account neurocognitive factors (master's thesis). Slepchenko, I. A. January 2024 (has links)
The master's thesis is devoted to the study of borrower creditworthiness assessment taking into account neurocognitive factors. In modern financial institutions, traditional methods of creditworthiness assessment are based mainly on the analysis of financial indicators and the borrower's credit history. However, the development of neuroeconomics and the cognitive sciences opens new opportunities to improve these techniques. The author proposes a creditworthiness assessment methodology that includes neurocognitive parameters, and the methodological part of the thesis is devoted to developing and testing this model. Both traditional financial indicators of borrowers and the results of neurocognitive tests are used as the database for the study.
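No model details are given in the abstract; as a purely hypothetical sketch of the idea, the snippet below fits a logistic-regression credit scorer on synthetic data twice: once with financial indicators only, and once with an added simulated neurocognitive test score. All feature names, effect sizes, and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Synthetic borrowers: columns are [debt-to-income, credit-history score,
# neurocognitive test score] -- all hypothetical, standardized features.
X = rng.normal(size=(n, 3))
true_w = np.array([1.5, 1.0, 0.8])            # assumed 'true' effect sizes
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

def fit_logistic(features, labels, lr=0.5, steps=3000):
    """Plain gradient-descent logistic regression (no intercept, for brevity)."""
    w = np.zeros(features.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w)))
        w -= lr * features.T @ (p - labels) / len(labels)
    return w

def accuracy(features, w, labels):
    return float(np.mean((features @ w > 0) == (labels == 1)))

w_fin = fit_logistic(X[:, :2], y)             # financial indicators only
w_all = fit_logistic(X, y)                    # financial + neurocognitive
acc_fin = accuracy(X[:, :2], w_fin, y)
acc_all = accuracy(X, w_all, y)
```

On this synthetic data the richer model separates the labels better because the third feature is informative by construction; on real data, the incremental value of neurocognitive scores would of course have to be established empirically.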
|
612 |
Various resource allocation and optimization strategies for high bit rate communications on power lines. Syed Muhammad, Fahad. 17 March 2010 (has links) (PDF)
In recent years, the development of indoor and outdoor communication networks and the growing number of applications have led to an ever-increasing need for high bit rate data transmission. Among the many competing technologies, power line communications (PLC) have their place because the infrastructure is already available. The main motivation of this thesis is to increase the bit rate and robustness of multicarrier PLC systems so that they can be used effectively in home networks and home automation. The theme of this research is to explore different modulation and channel coding approaches in conjunction with several resource allocation and optimization schemes. The objective is to improve the capabilities of PLC, to compete with other high bit rate communication solutions, and to cope effectively with the drawbacks inherent to the power network. A number of resource allocation and optimization strategies are studied to improve the overall performance of PLC systems. The performance of a communication system is generally measured in terms of the link's bit rate, noise margin and bit error rate (BER). Rate maximization (RM) is studied for OFDM (orthogonal frequency division multiplexing) and LP-OFDM (linear precoded OFDM) systems under a power spectral density (PSD) constraint. Two different error-rate constraints are applied to the RM problem. The first is a peak BER constraint, under which every subcarrier or precoding sequence must respect the target BER. Under the second, a mean BER constraint, different subcarriers or precoding sequences may take different BER values, and a mean BER constraint is imposed on the complete OFDM or LP-OFDM symbol.
Allocation algorithms are also proposed that take channel coding gains into account in the resource allocation process. In addition, a new mean-BER minimization scheme is introduced, which minimizes the system's mean BER for a given bit rate and an imposed PSD mask. For resource allocation in a multicarrier system, it is generally assumed that the channel state information (CSI) is perfectly known at the transmitter. In reality, the CSI available at the transmitter is imperfect, so we have also studied resource allocation schemes for OFDM and LP-OFDM systems that effectively take into account the impact of noisy estimates. Several communication chains are also developed for OFDM and LP-OFDM systems.
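As a hedged illustration of the RM setting under a PSD constraint (not the thesis's own algorithms), the sketch below performs discrete bit loading with the classical SNR-gap approximation: with the transmit PSD fixed by the mask, each subcarrier's SNR is fixed, and a peak-BER target translates into a single SNR gap. The gap value and SNR figures are assumptions for the example.

```python
import numpy as np

def bits_per_subcarrier(snr_db, gap_db=9.8, max_bits=15):
    """Discrete bit loading under a PSD mask via the SNR-gap approximation.

    With the PSD fixed by the mask, each subcarrier carries
    floor(log2(1 + SNR / gap)) bits, where the gap encodes the target
    (peak) BER and any coding gain. The 9.8 dB default is illustrative.
    """
    snr = 10.0 ** (np.asarray(snr_db) / 10.0)
    gap = 10.0 ** (gap_db / 10.0)
    b = np.floor(np.log2(1.0 + snr / gap)).astype(int)
    return np.clip(b, 0, max_bits)

snr_db = np.array([30.0, 21.0, 12.0, 5.0, -3.0])   # toy per-subcarrier SNRs
bits = bits_per_subcarrier(snr_db)
rate = int(bits.sum())                             # bits per multicarrier symbol
```

Strong subcarriers carry large constellations while deeply faded ones are switched off, which is the baseline the thesis's coded and LP-OFDM allocation schemes improve on.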
|
613 |
Integrating Combinatorial Scheduling with Inventory Management and Queueing Theory. Terekhov, Daria. 13 August 2013 (has links)
The central thesis of this dissertation is that by combining classical scheduling methodologies with those of inventory management and queueing theory we can better model, understand and solve complex real-world scheduling problems. In part II of this dissertation, we provide models of a realistic supply chain scheduling problem that capture both its combinatorial nature and its dependence on inventory availability. We present an extensive empirical evaluation of how well implementations of these models in commercially available software solve the problem. We are therefore able to address, within a specific problem, the need for scheduling to take into account related decision-making processes. In order to simultaneously deal with combinatorial and dynamic properties of real scheduling problems, in part III we propose to integrate queueing theory and deterministic scheduling. Firstly, by reviewing the queueing theory literature that deals with dynamic resource allocation and sequencing and outlining numerous future work directions, we build a strong foundation for the investigation of the integration of queueing theory and scheduling. Subsequently, we demonstrate that integration can take place on three levels: conceptual, theoretical and algorithmic. At the conceptual level, we combine concepts, ideas and problem settings from the two areas, showing that such combinations provide insights into the trade-off between long-run and short-run objectives. Next, we show that theoretical integration of queueing and scheduling can lead to long-run performance guarantees for scheduling algorithms that have previously been proved only for queueing policies. In particular, we are the first to prove, in two flow shop environments, the stability of a scheduling method that is based on the traditional scheduling literature and utilizes processing time information to make sequencing decisions. 
Finally, to address the algorithmic level of integration, we present, in an extensive future work chapter, one general approach for creating hybrid queueing/scheduling algorithms. To our knowledge, this dissertation is the first work that builds a framework for integrating queueing theory and scheduling. Motivated by characteristics of real problems, this dissertation takes a step toward extending scheduling research beyond traditional assumptions and addressing more realistic scheduling problems.
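The flow-shop method analyzed in the dissertation is not reproduced here; as a minimal example of a sequencing rule that "utilizes processing time information to make sequencing decisions", the sketch below checks the classical result that shortest-processing-time-first (SPT) minimizes total completion time on a single machine — the style of deterministic scheduling result whose long-run, queueing-theoretic counterparts the dissertation connects.

```python
from itertools import permutations

def total_completion_time(proc_times):
    """Sum of job completion times when jobs run back-to-back in the given order."""
    t, total = 0, 0
    for p in proc_times:
        t += p          # this job finishes at time t
        total += t
    return total

jobs = [4, 1, 3, 2]                    # toy processing times
spt_order = sorted(jobs)               # shortest-processing-time-first rule
# Brute-force check against every possible sequence of the four jobs.
best = min(total_completion_time(list(p)) for p in permutations(jobs))
```

SPT's optimality here is the deterministic analogue of the mean-flow-time optimality of SPT-like policies in queues, which is the kind of bridge between the two literatures the dissertation formalizes.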
|
615 |
Hybridization of Particle Swarm Optimization with Bat Algorithm for optimal reactive power dispatch. Agbugba, Emmanuel Emenike. 06 1900 (has links)
This research presents a Hybrid Particle Swarm Optimization with Bat Algorithm (HPSOBA) based
approach to solve Optimal Reactive Power Dispatch (ORPD) problem. The primary objective of
this project is minimization of the active power transmission losses by optimally setting the control
variables within their limits and at the same time making sure that the equality and inequality
constraints are not violated. Particle Swarm Optimization (PSO) and the Bat Algorithm (BA), both
nature-inspired algorithms, have become attractive options for solving very difficult optimization
problems such as ORPD. PSO converges quickly but requires a high computational time, while BA
requires less computation time and can switch automatically from exploration to exploitation as the
optimum approaches. This research integrates the respective advantages of the PSO and BA
algorithms into a hybrid tool denoted the HPSOBA algorithm. HPSOBA combines the fast
convergence of PSO with the lower computation time of BA to reach a better optimal solution,
incorporating BA's frequency into the PSO velocity equation to control the pace. The HPSOBA, PSO
and BA algorithms were implemented in the MATLAB programming language and tested on three
benchmark test functions (Griewank, Rastrigin and Schwefel) and on IEEE 30- and 118-bus test
systems to solve for ORPD without DG unit. A modified IEEE 30-bus test system was further used
to validate the proposed hybrid algorithm to solve for optimal placement of DG unit for active
power transmission line loss minimization. By comparison, HPSOBA algorithm results proved to
be superior to those of the PSO and BA methods.
To check whether the performance of HPSOBA could be improved further, it was extended with
three new modifications to form a modified hybrid approach denoted MHPSOBA. MHPSOBA was
validated on the IEEE 30-bus test system for the ORPD problem, and the results show that the
HPSOBA algorithm outperforms the modified version (MHPSOBA). / Electrical and Mining Engineering / M. Tech. (Electrical Engineering)
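The exact hybrid update is not given in the abstract; the sketch below shows one plausible reading of "incorporating BA's frequency into the PSO velocity equation": a BA-style random frequency multiplies the PSO attraction terms, controlling the pace of the velocity update. It is run on a simple sphere benchmark rather than the thesis's test functions, and all parameter values are assumptions, not the thesis's settings.

```python
import random

random.seed(42)

def sphere(x):
    """Benchmark objective (global minimum 0 at the origin)."""
    return sum(xi * xi for xi in x)

def hpsoba(obj, dim=3, swarm=20, iters=200,
           w=0.7, c1=1.5, c2=1.5, f_min=0.0, f_max=1.5, v_max=4.0):
    """Hypothetical PSO/BA hybrid step: BA frequency f scales PSO attraction."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=obj)[:]
    for _ in range(iters):
        for i in range(swarm):
            # BA-style random frequency drawn each step, as in the Bat Algorithm
            f = f_min + (f_max - f_min) * random.random()
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + f * c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + f * c2 * random.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-v_max, min(v_max, vel[i][d]))  # velocity clamp
                pos[i][d] += vel[i][d]
            if obj(pos[i]) < obj(pbest[i]):
                pbest[i] = pos[i][:]
                if obj(pos[i]) < obj(gbest):
                    gbest = pos[i][:]
    return gbest

best = hpsoba(sphere)
```

For the actual ORPD problem the objective would be the network's active power loss evaluated through a load flow, with the control variables bounded and the constraint violations penalized.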
|
616 |
Analysis and Reconstruction of the Hematopoietic Stem Cell Differentiation Tree: A Linear Programming Approach for Gene Selection. Ghadie, Mohamed A. January 2015 (has links)
Stem cells differentiate through an organized hierarchy of intermediate cell types to terminally differentiated cell types. This process is largely guided by master transcriptional regulators, but it also depends on the expression of many other types of genes. The discrete cell types in the differentiation hierarchy are often identified based on the expression or non-expression of certain marker genes. Historically, these have often been various cell-surface proteins, which are fairly easy to assay biochemically but are not necessarily causative of the cell type, in the sense of being master transcriptional regulators. This raises important questions about how gene expression across the whole genome controls or reflects cell state, and in particular, differentiation hierarchies. Traditional approaches to understanding gene expression patterns across multiple conditions, such as principal components analysis or K-means clustering, can group cell types based on gene expression, but they do so without knowledge of the differentiation hierarchy. Hierarchical clustering and maximization of parsimony can organize the cell types into a tree, but in general this tree is different from the differentiation hierarchy. Using hematopoietic differentiation as an example, we demonstrate how many genes other than marker genes are able to discriminate between different branches of the differentiation tree by proposing two models for detecting genes that are up-regulated or down-regulated in distinct lineages. We then propose a novel approach to solving the following problem: Given the differentiation hierarchy and gene expression data at each node, construct a weighted Euclidean distance metric such that the minimum spanning tree with respect to that metric is precisely the given differentiation hierarchy. 
We provide a set of linear constraints that are provably sufficient for the desired construction and a linear programming framework to identify sparse sets of weights, effectively identifying genes that are most relevant for discriminating different parts of the tree. We apply our method to microarray gene expression data describing 38 cell types in the hematopoiesis hierarchy, constructing a sparse weighted Euclidean metric that uses just 175 genes. These 175 genes are different from the marker genes that were used to identify the 38 cell types, hence offering a novel alternative way of discriminating different branches of the tree. A DAVID functional annotation analysis shows that the 175 genes reflect major processes and pathways active in different parts of the tree. However, we find that there are many alternative sets of weights that satisfy the linear constraints. Thus, in the style of random-forest training, we also construct metrics based on random subsets of the genes and compare them to the metric of 175 genes. Our results show that the 175 genes frequently appear in the random metrics, supporting their significance from an empirical point of view as well. Finally, we show how our linear programming method is able to identify columns that were selected to build minimum spanning trees on the nodes of random variable-size matrices.
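The linear program itself is not reproduced here; the sketch below illustrates the end goal on an invented toy hierarchy: a sparse weight vector (nonzero only on "informative" genes, the kind of solution the LP seeks) makes the minimum spanning tree under the weighted Euclidean metric coincide with the given differentiation tree. The cell-type names and expression values are hypothetical.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

cells = ["HSC", "MPP", "CMP", "GMP"]          # hypothetical 4-node lineage path
tree = {("HSC", "MPP"), ("MPP", "CMP"), ("CMP", "GMP")}

# Three 'informative' genes encode position along the path; five noise genes do not.
informative = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]], dtype=float)
noise = rng.normal(scale=2.0, size=(4, 5))
expr = np.hstack([informative, noise])        # rows: cell types, cols: genes

# Sparse weights: nonzero only on the informative genes (what the LP would select).
w = np.array([1, 1, 1] + [0] * 5, dtype=float)

def wdist(i, j):
    d = expr[i] - expr[j]
    return float(np.sqrt(np.sum(w * d * d)))  # weighted Euclidean distance

def mst_edges():
    """Kruskal's MST over the weighted metric, with a tiny union-find."""
    parent = list(range(len(cells)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    out = set()
    for i, j in sorted(combinations(range(len(cells)), 2), key=lambda e: wdist(*e)):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            out.add((cells[i], cells[j]))
    return out
```

Because the noise genes get zero weight, the MST recovers exactly the given hierarchy; the paper's LP finds such sparse weights at genome scale, subject to constraints that every non-tree pair be farther apart than the tree edges on the path between them.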
|
617 |
Multikanálová dekonvoluce obrazů / Multichannel Image Deconvolution. Bradáč, Pavel. January 2009 (has links)
This Master's Thesis deals with image restoration using deconvolution. The first part of the thesis explains the terms that introduce deconvolution theory, such as two-dimensional signals, the distortion model, noise and convolution. The second part deals with deconvolution methods based on the Bayesian approach, which rests on probability principles. The third part focuses on the Alternating Minimization Algorithm for Multichannel Blind Deconvolution. Finally, this algorithm is implemented in Matlab using the NAG C Library, followed by a comparison of different optimization methods (simplex, steepest descent, quasi-Newton), regularization forms (Tikhonov, Total Variation) and other parameters used by this deconvolution algorithm.
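The thesis implements the algorithm in Matlab with the NAG C Library; as a rough, language-shifted sketch of the alternating-minimization idea with Tikhonov regularization, the snippet below alternates linear least-squares updates of a 1-D "image" and two channel filters on a toy noiseless problem (the thesis works on 2-D images). Sizes, initialization, and the regularization weight are assumptions.

```python
import numpy as np

def conv_matrix(v, n):
    """Matrix A such that A @ u equals np.convolve(v, u) for len(u) == n."""
    A = np.zeros((len(v) + n - 1, n))
    for j in range(n):
        A[j:j + len(v), j] = v
    return A

# Toy 1-D two-channel blind deconvolution problem.
x_true = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 0.0])
h_true = [np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.5, 0.3])]
ys = [np.convolve(h, x_true) for h in h_true]      # observed channel outputs

lam = 1e-3                                          # Tikhonov weight (assumed)
n, k = x_true.size, 3
x = ys[0][:n].copy()                                # crude initial image estimate
obj_trace = []
for _ in range(40):
    # h-step: with x fixed, each channel filter is a linear least-squares fit
    X = conv_matrix(x, k)
    hs = [np.linalg.lstsq(X, y, rcond=None)[0] for y in ys]
    # x-step: with filters fixed, solve the stacked, Tikhonov-regularized system
    A = np.vstack([conv_matrix(h, n) for h in hs] + [np.sqrt(lam) * np.eye(n)])
    b = np.concatenate(ys + [np.zeros(n)])
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    data_term = sum(np.sum((np.convolve(h, x) - y) ** 2) for h, y in zip(hs, ys))
    obj_trace.append(data_term + lam * np.sum(x ** 2))
```

Each half-step solves its subproblem exactly, so the regularized objective is non-increasing; blind deconvolution retains a scale ambiguity between the image and the filters, which the residual does not see.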
|
618 |
Statistical analysis of clinical trial data using Monte Carlo methods. Han, Baoguang. 11 July 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In medical research, data analysis often requires complex statistical methods where no closed-form solutions are available. Under such circumstances, Monte Carlo (MC) methods have found many applications. In this dissertation, we proposed several novel statistical models where MC methods are utilized. For the first part, we focused on semicompeting risks data, in which a non-terminal event is subject to dependent censoring by a terminal event. Based on an illness-death multistate survival model, we proposed flexible random effects models. Further, we extended our model to the setting of joint modeling, where both semicompeting risks data and repeated marker data are analyzed simultaneously. Since the proposed methods involve high-dimensional integrations, Bayesian Markov chain Monte Carlo (MCMC) methods were utilized for estimation. The use of Bayesian methods also facilitates the prediction of individual patient outcomes. The proposed methods were demonstrated in both simulation and case studies.
For the second part, we focused on the re-randomization test, a nonparametric method that makes inferences solely based on the randomization procedure used in clinical trials. With this type of inference, the Monte Carlo method is often used to generate null distributions of the treatment difference. However, an issue was recently discovered when subjects in a clinical trial were randomized with unbalanced treatment allocation to two treatments according to the minimization algorithm, a randomization procedure frequently used in practice. The null distribution of the re-randomization test statistic was found not to be centered at zero, which compromised the power of the test. In this dissertation, we investigated the properties of the re-randomization test and proposed a weighted re-randomization method to overcome this issue. The proposed method was demonstrated through extensive simulation studies.
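The minimization-based randomization and the weighted correction are not reproduced here; the sketch below shows only the basic Monte Carlo re-randomization test under plain equal allocation, where the null distribution of the mean difference is centered at zero, with invented outcome data.

```python
import random

random.seed(7)

treat = [5.1, 6.0, 5.8, 6.4, 5.9, 6.2]   # toy outcomes, treatment arm
ctrl = [5.0, 5.2, 4.9, 5.4, 5.1, 5.3]    # toy outcomes, control arm

obs = sum(treat) / len(treat) - sum(ctrl) / len(ctrl)   # observed difference
pooled = treat + ctrl
n_t = len(treat)

# Monte Carlo null distribution: re-randomize labels and recompute the statistic.
n_mc, count = 5000, 0
for _ in range(n_mc):
    random.shuffle(pooled)                # equal-allocation re-randomization
    diff = (sum(pooled[:n_t]) / n_t
            - sum(pooled[n_t:]) / (len(pooled) - n_t))
    count += diff >= obs - 1e-12          # tolerance guards float ties
p_value = (count + 1) / (n_mc + 1)        # add-one MC p-value estimate
```

Under minimization with unbalanced allocation, the shuffle above would have to be replaced by re-running the actual allocation algorithm, which is exactly where the off-center null the dissertation corrects arises.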
|
619 |
Phase Unwrapping MRI Flow Measurements / Fasutvikning av MRT-flödesmätningar. Liljeblad, Mio. January 2023 (has links)
Magnetic resonance images (MRI) are acquired by sampling the current induced by an electromotive force (EMF). The EMF is induced by the flux of the net magnetic field from coherent nuclear spins with intrinsic magnetic dipole moments. The spins are excited by (non-ionizing) radio-frequency electromagnetic radiation in conjunction with stationary and gradient magnetic fields. These images reveal detailed internal morphological structures and enable functional assessment of the body, which can help diagnose a wide range of medical conditions. The aim of this project was to unwrap phase-contrast cine magnetic resonance images, targeting the great vessels. The maximum encoded velocity (venc) is limited to the angular phase range [-π, π] radians, which may result in aliasing if the venc is set too low by the MRI personnel. Aliased images yield inaccurate cardiac stroke volume measurements and therefore require acquisition retakes. The retakes might be avoided if the images could instead be unwrapped in post-processing. Using computer vision, the angular phase of flow measurements, as well as the angular phase of retrospectively wrapped image sets, was unwrapped. The performance of three algorithms was assessed: the Laplacian algorithm, sequential tree-reweighted message passing, and iterative graph cuts. The associated energy formulation was also evaluated. Iterative graph cuts was shown to be the most robust with respect to the number of wraps, and the energies correlated with the errors. This thesis shows that there is potential to reduce the number of acquisition retakes, although the MRI personnel still need to verify that the unwrapping performance is satisfactory. Given the promising results of iterative graph cuts, it would next be valuable to investigate the performance of a globally optimal surface estimation algorithm.
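The thesis unwraps 2-D images with the Laplacian algorithm, tree-reweighted message passing, and iterative graph cuts; the sketch below only illustrates the underlying principle on a 1-D velocity profile, where sequential unwrapping suffices. The venc and velocity values are invented.

```python
import numpy as np

venc = 100.0                                  # cm/s; toy velocity-encoding limit
true_v = np.array([0.0, 40.0, 90.0, 140.0, 180.0, 120.0, 60.0, 0.0])

phase = true_v / venc * np.pi                 # velocity maps linearly to phase
wrapped = np.angle(np.exp(1j * phase))        # acquisition aliases into (-pi, pi]
aliased_v = wrapped / np.pi * venc            # what a too-low venc would report

# Sequential unwrapping restores continuity by adding multiples of 2*pi
# wherever adjacent samples jump by more than pi.
recovered_v = np.unwrap(wrapped) / np.pi * venc
```

Here the peak velocity of 180 cm/s wraps to -20 cm/s and is recovered exactly; in 2-D images noise and disconnected regions make the problem much harder, motivating the energy-minimization formulations compared in the thesis.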
|
620 |
Environmental risk assessment associated with unregulated landfills in the Albert Luthuli Municipality, Mpumalanga Province, RSA. Mnisi, Fannie. 31 August 2008 (has links)
Integrated management of municipal and hazardous waste is one of the challenges facing the new
municipalities in South Africa, especially those located in previously disadvantaged rural areas.
However, much of the research on solid and hazardous waste management in South Africa has
examined waste management problems in urban areas, the majority of which are located within
the jurisdiction of local governments which are comparatively effective in terms of providing
adequate disposal services. By contrast, this study has examined the environmental risk
assessment associated with unregulated landfill sites in the Albert Luthuli municipality, in the
Mpumalanga province. The determination of the environmental risk was achieved by the use of
questionnaire surveys and landfill analysis forms in selected study areas.
The findings have highlighted a very high environmental risk, nearly four times or more above the
threshold limits set by the Department of Environmental Affairs and Tourism (DEAT, 2005:15) for
all of the landfill sites examined. Several exposure pathways stemming from associated
environmental impacts have also been identified for the study. The higher environmental risk
determined for the problem sites is ascribed to numerous factors, including their ill-planned
location, the sensitivity and vulnerability of the natural environment and adjacent rural settlements,
the lack of appropriate waste pre-treatment processes prior to disposal, and most significantly, the
lack of regulatory and control measures to contain the myriad of environmental problems
generated. In conclusion, it is recommended that several measures (including closure) should be
taken in order to reduce and contain the magnitude of environmental risks involved. / Environmental Sciences / M.Sc.(Environmental Sciences)
|