431

Classification of Carpiodes Using Fourier Descriptors: A Content Based Image Retrieval Approach

Trahan, Patrick 06 August 2009 (has links)
Taxonomic classification has always been important to the study of any biological system. At the current rate of classification, many biological species will go unclassified and be lost forever. The current state of computer technology makes image storage and retrieval possible on a global level, and as a result, computer-aided taxonomy is now feasible. Content-based image retrieval techniques use visual features of an image for classification. By exploiting image content and computer technology, the gap between taxonomic classification and species destruction is shrinking. This content-based study uses the Fourier descriptors of fifteen known landmark features on three Carpiodes species: C. carpio, C. velifer, and C. cyprinus. The classification analysis involves both unsupervised and supervised machine learning algorithms. Fourier descriptors of the fifteen known landmarks provide strong classification power on image data, and feature reduction analysis indicates that the feature set can be reduced, which is useful for increasing the generalization power of the classifiers.
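The abstract does not spell out the descriptor computation, but a minimal Python sketch of contour-based Fourier descriptors illustrates the general technique (the thesis works from fifteen landmark features, so this whole-outline version and the ellipse stand-in below are illustrative assumptions, not the thesis's pipeline):

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=15):
    """Normalized Fourier descriptors of a closed 2D contour.

    contour: (N, 2) array of (x, y) boundary points. Returns n_coeffs
    magnitudes, invariant to translation, scale, rotation, and start point.
    """
    z = contour[:, 0] + 1j * contour[:, 1]  # encode boundary as complex numbers
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                         # drop DC term: translation invariance
    mags = np.abs(coeffs)                   # magnitudes: rotation/start-point invariance
    mags = mags / mags[1]                   # normalize by first harmonic: scale invariance
    return mags[2:n_coeffs + 2]             # first n_coeffs informative magnitudes

# Example: a noisy ellipse stands in for a fish outline.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
outline = np.column_stack([2.0 * np.cos(t), np.sin(t)]) + 0.01 * np.random.randn(256, 2)
print(fourier_descriptors(outline))
```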
432

Hodnocení slovní zásoby u dětí se sluchovým postižením ve školním věku / Lexicon evaluation of school-age children with hearing impairment

Sehnoutková, Adéla January 2019 (has links)
This thesis examines differences in the sign language vocabulary of school-age children with hearing impairment who attend an elementary school for the hearing impaired. The work is divided into a theoretical and a practical part. The theoretical part comprises three chapters. The first chapter deals with the importance of hearing, the classification of hearing impairment, and diagnostics. The thesis also covers the speech development of hearing-impaired and hearing children, the specific features occurring in the speech of hearing-impaired people, and a list of tests that can be used to evaluate children's vocabulary. The last chapter of the theoretical part describes child development during school age. The practical part deals with the sign language vocabulary of children with hearing impairment at a primary school for the hearing impaired. The research is qualitative; its main goals are to identify the differences that can be observed in the vocabulary of children with hearing impairment and to verify whether the selected test material is suitable for working with these children. The test material could serve as guidance for verifying these children's vocabulary knowledge. KEY WORDS: hearing...
433

Échantillonnage préférentiel adaptatif et méthodes bayésiennes approchées appliquées à la génétique des populations. / Adaptive multiple importance sampling and approximate bayesian computation with applications in population genetics.

Sedki, Mohammed Amechtoh 31 October 2012 (has links)
In this thesis, we propose Bayesian inference techniques for models whose likelihood has a latent component. The likelihood of an observed dataset is the integral of the so-called complete likelihood over the space of the latent variable. We focus on cases where the latent-variable space is very high-dimensional and mixes directions of different natures (discrete and continuous), which makes this integral intractable. The preferred field of application of this thesis is inference in population genetics models. Population geneticists base their studies on genetic information extracted from present-day populations, which constitutes the observed variable; the information comprising the spatial and temporal history of the species under study is generally inaccessible and constitutes the latent component. Our first contribution assumes that the likelihood can be evaluated through a numerically expensive approximation. The Adaptive Multiple Importance Sampling (AMIS) scheme of Cornuet et al. [2012] requires few likelihood evaluations and recycles them. This algorithm approximates the posterior distribution by a system of weighted particles and is designed to recycle the simulations obtained along the iterative process (the sequential construction of a sequence of importance distributions). In the many numerical experiments carried out on population genetics models, the AMIS algorithm showed very promising numerical performance in terms of stability. These numerical properties are particularly well suited to our context; however, the question of the convergence of the estimators obtained by this technique remains largely open. In this thesis, we prove convergence results for a slightly modified version of this algorithm and show through simulations that its numerical qualities are identical to those of the original scheme. In the second contribution of this thesis, we forgo the approximation of the likelihood and only assume that simulating from the model (from the likelihood) is possible. Our contribution is a sequential ABC (Approximate Bayesian Computation) algorithm. On population genetics models, this method can prove slow when a precise approximation of the posterior distribution is sought. The algorithm we propose improves on the ABC-SMC algorithm of Del Moral et al. [2012]: we optimize the number of calls to simulations from the likelihood and equip it with a self-calibrated mechanism for choosing the acceptance levels. We implement our algorithm to infer the parameters of a real, complex evolutionary scenario in population genetics, and we show that, for the same approximation quality, our algorithm requires half as many simulations as the commonly used ABC rejection method. / This thesis consists of two parts which can be read independently. The first part concerns the Adaptive Multiple Importance Sampling (AMIS) algorithm presented in Cornuet et al. (2012), which provides a significant improvement in stability and effective sample size thanks to the introduction of the recycling procedure. These numerical properties are particularly well suited to the Bayesian paradigm in population genetics, where the modeling involves a large number of parameters. However, the consistency of the AMIS estimator remains largely open. In this work, we provide a novel Adaptive Multiple Importance Sampling scheme, corresponding to a slight modification of the Cornuet et al. (2012) proposal, that preserves the above-mentioned improvements. Finally, using limit theorems on triangular arrays of conditionally independent random variables, we give a consistency result for the final particle system returned by our new scheme. The second part of this thesis lies within the ABC paradigm. Approximate Bayesian Computation has been successfully used in population genetics models to bypass the calculation of the likelihood. These algorithms provide an accurate estimator by comparing the observed dataset to a sample of datasets simulated from the model. Although parallelization is easily achieved, the computation times needed to ensure a suitable approximation quality of the posterior distribution are still long. To alleviate this issue, we propose a sequential algorithm adapted from Del Moral et al. (2012) which runs twice as fast as traditional ABC algorithms. Its parameters are calibrated to minimize the number of simulations from the model.
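As a point of reference, the plain ABC rejection sampler that such sequential algorithms are benchmarked against can be sketched as follows (a minimal illustration with a toy Gaussian model, not the population-genetics setup of the thesis):

```python
import numpy as np

def abc_rejection(observed, prior_sample, simulate, summary, eps, n_draws=10_000):
    """Basic ABC rejection sampler (the baseline the thesis improves on).

    prior_sample() draws a parameter from the prior; simulate(theta) draws a
    dataset from the model; summary() maps data to summary statistics.
    Accepts theta when the simulated summaries fall within eps of the observed ones.
    """
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if np.linalg.norm(summary(simulate(theta)) - s_obs) <= eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a Gaussian with known variance.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100)
post = abc_rejection(
    observed=data,
    prior_sample=lambda: rng.normal(0.0, 5.0),
    simulate=lambda th: rng.normal(th, 1.0, size=100),
    summary=lambda x: np.array([x.mean()]),
    eps=0.1,
)
print(post.size, "accepted; posterior mean ~", post.mean())
```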
434

"Testes de hipótese e critério bayesiano de seleção de modelos para séries temporais com raiz unitária" / "Hypothesis testing and bayesian model selection for time series with a unit root"

Silva, Ricardo Gonçalves da 23 June 2004 (has links)
The literature on hypothesis testing in autoregressive models that may contain a unit root is vast and spans research from several areas. This dissertation first reviews the main existing results, from both the classical and the Bayesian views of inference. Concerning the classical toolkit, the role of Brownian motion is presented in detail, emphasizing its applicability in deriving asymptotic statistics for hypothesis tests on the presence of a unit root. Concerning Bayesian inference, a detailed examination of the current state of the literature is first conducted. A comparative study is then carried out in which the unit root hypothesis is tested based on the posterior density of the model parameter, considering the following prior densities: flat, Jeffreys, normal, and beta. Inference is performed with the Metropolis-Hastings algorithm, using the Markov chain Monte Carlo (MCMC) simulation technique. The power, size, and confidence of the presented tests are computed using simulated series. Finally, a Bayesian model selection criterion is proposed, using the same prior distributions as the hypothesis tests. Both procedures are illustrated with empirical applications to macroeconomic time series. / Testing the unit root hypothesis in non-stationary autoregressive models has been a research topic spread across many academic areas. As a first step in approaching this issue, this dissertation includes an extensive review highlighting the main results provided by classical and Bayesian inference methods. Concerning the classical approach, the role of Brownian motion is discussed in detail, clearly emphasizing its application to obtaining good asymptotic statistics when testing for the existence of a unit root in a time series. For the Bayesian approach, a detailed discussion is also introduced in the main text. Then, on the empirical side of this dissertation, we implement a comparative study for testing a unit root based on the posterior density of the model's parameter, taking into account the following prior densities: flat, Jeffreys, normal, and beta. The inference is based on the Metropolis-Hastings algorithm and the Markov chain Monte Carlo (MCMC) technique. Simulated time series are used to calculate the size, power, and confidence intervals of the developed unit root hypothesis test. Finally, we propose a Bayesian criterion for selecting models based on the same prior distributions used for the hypothesis tests. Both procedures are empirically illustrated through applications to macroeconomic time series.
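A minimal sketch of the Metropolis-Hastings step for the AR(1) coefficient under a flat prior conveys the flavor of the comparative study (the dissertation also considers Jeffreys, normal, and beta priors; the toy series and tuning below are illustrative assumptions):

```python
import numpy as np

def mh_ar1(y, n_iter=20_000, step=0.02, seed=1):
    """Random-walk Metropolis-Hastings for rho in y_t = rho*y_{t-1} + e_t,
    e_t ~ N(0, 1), under a flat prior on rho."""
    rng = np.random.default_rng(seed)

    def log_lik(rho):
        resid = y[1:] - rho * y[:-1]               # conditional Gaussian log-likelihood
        return -0.5 * np.sum(resid ** 2)

    rho, chain = 0.5, []
    ll = log_lik(rho)
    for _ in range(n_iter):
        prop = rho + step * rng.standard_normal()
        ll_prop = log_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # flat prior: ratio is likelihood only
            rho, ll = prop, ll_prop
        chain.append(rho)
    return np.array(chain[n_iter // 2:])           # discard burn-in

# Toy near-unit-root series; P(rho >= 1 | y) summarizes the unit root test.
rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.95 * y[t - 1] + rng.standard_normal()
draws = mh_ar1(y)
print("posterior mean:", draws.mean(), " P(rho >= 1):", (draws >= 1.0).mean())
```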
435

Évaluation de paramètres de sûreté de fonctionnement en présence d'incertitudes et aide à la conception : application aux Systèmes Instrumentés de Sécurité / Evaluation of safety parameters under uncertainty and optimal design of systems : application to safety instrumented systems

Sallak, Mohamed 19 October 2007 (has links)
The introduction of instrumented systems dedicated to safety applications requires the evaluation of their dependability. Generic reliability databases are generally used for this purpose. However, operating feedback for these systems, whose failures are generally rare, is insufficient to validate the results obtained. Moreover, collecting reliability data and extrapolating them to other components introduce uncertainties. This thesis addresses the problem of accounting for uncertainties in component reliability data when evaluating system dependability, using the formalism of fuzzy sets. The proposed methodology is applied to evaluating the failure probabilities of Safety Instrumented Systems (SIS) in the presence of imprecise reliability data. We introduce two new importance measures to assist the designer. In addition, we propose a design-support methodology for SIS based on reliability-network modeling and genetic-algorithm optimization of the SIS structure so as to meet the required Safety Integrity Levels (SIL). / The use of safety-related systems requires evaluating their dependability. Laboratory and generic data are often used to provide failure data for safety components in order to evaluate their dependability parameters. However, due to the low solicitation of safety systems in plants, safety components have not been operating long enough to provide statistically valid failure data. Furthermore, measuring and collecting failure data carry uncertainty, and borrowing data from laboratory and generic sources involves uncertainty as well. Our contribution is a fuzzy approach to evaluating the dependability parameters of safety systems when there is uncertainty about the dependability parameters of system components. This approach is applied to determining the probability of failure on demand of Safety Instrumented Systems (SIS) in the presence of uncertainty. Furthermore, we present an optimal design method for SIS, using reliability graphs and genetic algorithms to identify the choice of components and the design configuration of a SIS that meet the required SIL.
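A minimal sketch of the fuzzy idea, assuming a triangular fuzzy number for the per-channel probability of failure on demand and a simple 1oo2 architecture (both illustrative assumptions, not the thesis's data or exact method), propagates each alpha-cut interval through the monotone system formula:

```python
def tri_alpha_cut(a, m, b, alpha):
    """Alpha-cut interval [lo, hi] of a triangular fuzzy number (a, m, b)."""
    return a + alpha * (m - a), b - alpha * (b - m)

def pfd_1oo2(p):
    """PFD of a 1oo2 architecture: both independent channels must fail."""
    return p * p

# Hypothetical imprecise per-channel probability of failure on demand.
a, m, b = 1e-3, 2e-3, 5e-3                      # illustrative triangular fuzzy number
for alpha in (0.0, 0.5, 1.0):
    lo, hi = tri_alpha_cut(a, m, b, alpha)
    # pfd_1oo2 is monotone in p, so the interval bounds map directly.
    print(f"alpha={alpha:.1f}: system PFD in [{pfd_1oo2(lo):.2e}, {pfd_1oo2(hi):.2e}]")
```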
436

[en] STATE SPACE MODEL FOR TIME SERIES WITH BIVARIATE POISSON DISTRIBUTION: AN APPLICATION OF DURBIN-KOOPMAN METHODOLOGY / [pt] MODELO EM ESPAÇO DE ESTADO PARA SÉRIES TEMPORAIS COM DISTRIBUIÇÃO POISSON BIVARIADA: UMA APLICAÇÃO DA METODOLOGIA DURBIN-KOOPMAN

SERGIO EDUARDO CONTRERAS ESPINOZA 15 September 2004 (has links)
[pt] In this thesis, we consider a bivariate state space model for count data. The approach used to solve the non-analytical integrals that arise in the model is a natural extension of the methodology proposed by Durbin and Koopman (DK), in the sense that the approximating Gaussian model must have some diagonal covariance matrices. This modification brings the advantage of enabling the univariate treatment of multivariate series with the Kalman recursions, which, as is well known, is more efficient than the usual treatment and facilitates the exact initialization of those recursions. The state vector of the proposed model is defined using the structural approach, in which the elements of the state vector have a direct interpretation as trend and seasonality. We present simulated and real examples to illustrate the model. / [en] In this thesis we consider a state space model for bivariate observations of count data. The approach used to solve the non-analytical integrals that appear in the solution of the resulting non-Gaussian filter is a natural extension of the methodology advocated by Durbin and Koopman (DK). In our approach the approximating Gaussian model (AGM) has a diagonal covariance matrix, while in the original DK this is a full matrix. This modification makes it possible to use univariate Kalman recursions to construct the AGM, resulting in a computationally more efficient solution for the estimation of a bivariate Poisson model. It also facilitates the exact initialization of those recursions. The state vector is specified using the structural approach, where the state elements are components with direct interpretation, such as trend and seasonals. In our bivariate setup, the dependence between the two time series is accomplished by the use of common components which drive both series. We present both simulated and real-life examples illustrating the use of our model.
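The scalar Kalman recursion underlying the univariate treatment can be sketched as follows for a local level model (a generic building block under stated assumptions, not the thesis's bivariate Poisson implementation):

```python
import numpy as np

def kalman_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Univariate Kalman filter for the local level model
        y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t.
    The univariate treatment of multivariate series applies these scalar
    updates one observation element at a time."""
    a, p = a0, p0                       # state mean and variance (diffuse start)
    filtered = []
    for yt in y:
        f = p + sigma_eps2              # prediction-error variance
        k = p / f                       # Kalman gain
        a = a + k * (yt - a)            # filtered state mean
        p = p * (1.0 - k) + sigma_eta2  # update + one-step-ahead prediction variance
        filtered.append(a)
    return np.array(filtered)

# Toy usage: recover a slowly drifting level from noisy observations (illustrative).
rng = np.random.default_rng(42)
level = np.cumsum(0.1 * rng.standard_normal(100)) + 5.0
y = level + rng.standard_normal(100)
print(kalman_local_level(y, sigma_eps2=1.0, sigma_eta2=0.01)[-5:])
```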
437

South Africa’s response in fulfilling her obligations to meet the legal measures of wetland conservation and wise use

Lemine, Bramley Jemain January 2018 (has links)
Thesis (MTech (Environmental Management))--Cape Peninsula University of Technology, 2018. / South Africa is a signatory to the Convention on Wetlands of International Importance especially as Waterfowl Habitat of 1971 (the Ramsar Convention), an international convention making provision for the protection and wise use of wetlands. Article 3 of the Ramsar Convention requires signatories to formulate and implement their planning so as to promote the wise use of wetlands within their jurisdiction. "Wise use of wetlands" is defined as "the maintenance of their ecological character, achieved through the implementation of ecosystem approaches, within the context of sustainable development" (Birnie & Boyle, 2009: 674). The concept of wise use has been interpreted to mean sustainable development as it pertains to wetlands (de Klemm & Shine, 1999: 47; Kiss & Shelton, 2007: 93; Sands, 2003: 604; Birnie & Boyle, 2009: 49, 674). Against this background, the National Environmental Management Act 107 of 1998 (NEMA) sets out principles of sustainable development that every organ of state must apply in the execution of its duties. Given the wise use-sustainable development link, two NEMA principles form the basis of this study: sections 2(4)(l) and 2(4)(r). The first places an obligation on the state to ensure intergovernmental coordination and harmonisation of policies, legislation and actions relating to the environment (read to include wetlands); the second is to ensure that specific attention in management and planning is given to wetlands. Ironically, factors identified as hindering wise use include, but are not limited to, conflicting and incomplete sectoral law, the absence of monitoring procedures, and the absence of legal measures for the environmental management of water quantity and quality. An analysis was therefore undertaken to determine the extent to which South Africa's legislative framework regulating wetland conservation fulfils the requirements for the promotion of wise use through these two principles. The focus was on environmental and related legislation, policies and regulations that promote and/or constrain wetland conservation and wise use. This study identifies flaws within the law and proposes streamlining and, where apposite, amendments to the existing legislative framework regulating wetlands in order for South Africa to fulfil her obligations.
438

Analyse de sensibilité fiabiliste avec prise en compte d'incertitudes sur le modèle probabiliste - Application aux systèmes aérospatiaux / Reliability-oriented sensitivity analysis under probabilistic model uncertainty – Application to aerospace systems

Chabridon, Vincent 26 November 2018 (has links)
Aerospace systems are complex systems whose reliability must be guaranteed from the design phase onward, given the costs associated with the severe damage that the slightest failure would cause. Accounting for the uncertainties affecting the behavior of these systems ("aleatory" uncertainties, tied to the natural variability of certain phenomena) and their modeling ("epistemic" uncertainties, tied to the lack of knowledge and to modeling choices) makes it possible to estimate their reliability and remains a crucial challenge in engineering. Uncertainty quantification and its associated methodology thus consist, first, in modeling and propagating these uncertainties through the numerical model, treated as a black box, in order to estimate a reliability quantity of interest such as a failure probability. For highly reliable systems, the sought failure probability is very small and can be very costly to estimate. A sensitivity analysis of the quantity of interest with respect to the input uncertainties can also be performed to better identify and rank the influence of the different sources of uncertainty. The probabilistic modeling of the input variables (epistemic uncertainty) can thus play a decisive role in the value of the obtained probability, and a deeper analysis of the impact of this type of uncertainty must be conducted to give greater confidence in the estimated reliability. This thesis deals with accounting for the imperfect knowledge of the probabilistic model of the stochastic inputs. In a probabilistic framework, a "bi-level" uncertainty (aleatory/epistemic) must be modeled and then propagated through all the steps of the uncertainty quantification methodology. In this thesis, uncertainty is treated in a Bayesian framework in which the imperfect knowledge of the distribution parameters of the input variables is characterized by a prior density. First, after propagating the bi-level uncertainty, the predictive failure probability is used as a substitute measure for the classical failure probability. Second, a local sensitivity analysis of this predictive failure probability with respect to the hyper-parameters of the input probability distributions, based on score functions, is proposed. Finally, a global sensitivity analysis based on Sobol indices applied to the binary failure-indicator variable is carried out. All the methods proposed in this thesis are applied to an industrial case of the fallout of a launcher stage. / Aerospace systems are complex engineering systems whose reliability has to be guaranteed at an early design phase, especially given the potentially tremendous damage and costs that any failure could induce. Moreover, the management of the various sources of uncertainty, whether impacting the behavior of the systems ("aleatory" uncertainty due to the natural variability of physical phenomena) or their modeling and simulation ("epistemic" uncertainty due to lack of knowledge and modeling choices), is a cornerstone of the reliability assessment of those systems. Uncertainty quantification and its underlying methodology thus consist of several phases. Firstly, one needs to model and propagate uncertainties through the computer model, which is considered a "black box". Secondly, a quantity of interest relevant to the goal of the study, here a failure probability, has to be estimated. For highly safe systems, the failure probability that is sought is very low and may be costly to estimate. Thirdly, a sensitivity analysis of the quantity of interest can be set up to better identify and rank the influential sources of input uncertainty. The probabilistic modeling of the input variables (epistemic uncertainty) may therefore strongly influence the failure probability estimate obtained during the reliability analysis, and a deeper investigation of the robustness of the probability estimate with respect to this type of uncertainty has to be conducted. This thesis addresses the problem of taking the probabilistic modeling uncertainty of the stochastic inputs into account. Within the probabilistic framework, a "bi-level" input uncertainty has to be modeled and propagated along the different steps of the uncertainty quantification methodology. In this thesis, the uncertainties are modeled within a Bayesian framework, in which the lack of knowledge about the distribution parameters is characterized by the choice of a prior probability density function. In a first phase, after propagation of the bi-level input uncertainty, the predictive failure probability is estimated and used as the reliability measure instead of the standard failure probability. In a second phase, a local reliability-oriented sensitivity analysis based on score functions is carried out to study the impact of the hyper-parameterization of the prior on the predictive failure probability estimate. Finally, a global reliability-oriented sensitivity analysis, based on Sobol indices of the indicator function adapted to the bi-level input uncertainty, is proposed. All the proposed methodologies are tested and challenged on a representative industrial aerospace test case simulating the fallout of an expendable space launcher.
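A minimal double-loop Monte Carlo sketch of the predictive failure probability, with a toy limit state and an assumed prior on a single distribution parameter (the thesis targets much rarer failures with more efficient estimators; all names and numbers below are illustrative):

```python
import numpy as np

def predictive_failure_probability(g, prior_draw, input_draw,
                                   n_epistemic=200, n_aleatory=2000, seed=0):
    """Double-loop Monte Carlo estimate of the predictive failure probability
    P_f = E_theta[ P(g(X) <= 0 | theta) ], with theta ~ prior (epistemic level)
    and X ~ input model given theta (aleatory level)."""
    rng = np.random.default_rng(seed)
    p = 0.0
    for _ in range(n_epistemic):
        theta = prior_draw(rng)                   # epistemic: distribution parameters
        x = input_draw(rng, theta, n_aleatory)    # aleatory: inputs given theta
        p += np.mean(g(x) <= 0.0)                 # conditional failure fraction
    return p / n_epistemic

# Illustrative limit state g(x) = 3 - x with X ~ N(mu, 1) and mu uncertain.
pf = predictive_failure_probability(
    g=lambda x: 3.0 - x,
    prior_draw=lambda rng: rng.normal(0.0, 0.3),  # prior on the mean mu
    input_draw=lambda rng, mu, n: rng.normal(mu, 1.0, size=n),
)
print(f"predictive failure probability ~ {pf:.4f}")
```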
439

Redes complexas para classificação de dados via conformidade de padrão, caracterização de importância e otimização estrutural / Data classification in complex networks via pattern conformation, data importance and structural optimization

Carneiro, Murillo Guimarães 08 November 2016 (has links)
Classification is a machine learning and data mining task in which a classifier is trained on a set of labeled data so that the classes of new data items can be predicted. Traditionally, classification techniques work by defining decision boundaries in the data space based on the physical attributes of the training set, and a new instance is classified by checking its position relative to those boundaries. This way of performing classification, essentially based on the physical attributes of the data, prevents traditional techniques from capturing semantic relations among the data, such as pattern formation. On the other hand, the use of complex networks has emerged as a promising way to capture spatial, topological, and functional relations in the data, since the network abstraction unifies the structure, dynamics, and functions of the represented system. The main objective of this thesis is thus the development of methods and heuristics based on complex network theory for data classification. The main contributions involve the concepts of pattern conformation, data importance characterization, and structural network optimization. For pattern conformation, where complex network measures are used to estimate the agreement of a test item with the formation pattern of the data, a simple hybrid technique is presented in which physical and topological associations are produced from the same network. For importance characterization, a technique is presented that considers the individual importance of the data items to determine the label of a test item; the concept of importance here is defined in terms of PageRank, the algorithm used in Google's search engine to define the importance of web pages. For structural network optimization, a bio-inspired framework is presented that builds the network while optimizing a task-oriented quality function, such as classification or dimensionality reduction. The last investigation in this document explores the graph-based representation and its ability to detect classes of arbitrary distributions in the semantic role diffusion task. All investigations include extensive experiments on artificial and real-world datasets, along with comparisons against widely used techniques. In summary, the results show that the advantages and new concepts afforded by the use of networks constitute relevant contributions to the areas of classification, learning systems, and complex networks. / Data classification is a machine learning and data mining task in which a classifier is trained on a set of labeled data instances so that the labels of new instances can be predicted. Traditionally, classification techniques define decision boundaries in the data space according to the physical features of a training set, and a new data item is classified by verifying its position relative to those boundaries. Such classification, based only on the physical attributes of the data, makes traditional techniques unable to detect semantic relationships existing among the data, such as pattern formation. On the other hand, recent works have shown that the use of complex networks is a promising way to capture spatial, topological, and functional relationships in the data, as the network representation unifies the structure, dynamics, and functions of the networked system. The main objective of this thesis is the development of methods and heuristics based on complex networks for data classification. The main contributions comprise the concepts of pattern conformation, data importance, and network structural optimization. For pattern conformation, in which complex networks are employed to estimate the membership of a test item according to the data formation pattern, we present a simple hybrid technique in which physical and topological associations are produced from the same network. For data importance, we present a technique which considers the individual importance of the data items in order to determine the label of a given test item; the concept of importance here is derived from the PageRank formulation, the ranking measure behind Google's search engine used to calculate the importance of web pages. For network structural optimization, we present a bio-inspired framework which builds up the network while optimizing a task-oriented quality function such as classification or dimension reduction. The last investigation presented in this thesis exploits the graph representation and its ability to detect classes of arbitrary distributions for the task of semantic role diffusion. In all investigations, a wide range of experiments on artificial and real-world datasets, and many comparisons with well-known and widely used techniques, are presented. In summary, the experimental results reveal that the advantages and new concepts provided by the use of networks represent relevant contributions to the areas of classification, learning systems, and complex networks.
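A rough sketch of the data-importance idea, assuming a kNN graph and per-class personalized PageRank (the function names, graph construction, and toy data below are illustrative assumptions, not the thesis's exact formulation):

```python
import numpy as np
import networkx as nx

def knn_graph(X, k=3):
    """Undirected kNN graph over the row vectors of X (Euclidean distance)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    for i in range(len(X)):
        for j in np.argsort(d[i])[1:k + 1]:   # skip self (distance 0)
            G.add_edge(i, int(j))
    return G

def pagerank_label(X_train, y_train, x_test, k=3):
    """Label a test item by the class whose training nodes lend it the most
    importance: run PageRank personalized on each class's nodes and compare
    the test node's score."""
    X = np.vstack([X_train, x_test])
    G = knn_graph(X, k)
    test = len(X) - 1
    scores = {}
    for c in np.unique(y_train):
        seeds = {int(i): 1.0 for i in np.where(y_train == c)[0]}
        scores[c] = nx.pagerank(G, personalization=seeds)[test]
    return max(scores, key=scores.get)

# Two Gaussian blobs; the test point sits near the second class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(pagerank_label(X, y, np.array([2.8, 3.1])))   # expected: 1
```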
440

The Indian Pharmaceutical Industry's Supply Chain Management Strategies

Bolineni, Prasad 01 January 2016 (has links)
Indian pharmaceutical companies spend one-third of their revenue on supply chain management (SCM) activities, largely because of inherently poor transportation infrastructure. SCM is a vital function for many companies, as it is usually employed to lower expenses and increase sales. SCM costs are higher in India than in other areas of the world, amounting to 13% of India's GDP. The purpose of this study was to explore the SCM strategies Indian business leaders in the pharmaceutical industry have used to reduce the high costs associated with SCM. The study used a single case study research design and semistructured interviews to collect data from 3 SCM business leaders working in Indian pharmaceutical organizations who possess successful experience in using SCM strategies to reduce high costs. Goldratt's (1990) theory of constraints served as the conceptual framework for identifying challenges associated with SCM strategies. Data from semistructured interviews, observations, and company documents were processed and analyzed using data source triangulation, grouping the raw data into key themes. Three themes emerged: distribution and logistics challenges, the impact of SCM processes, and best practices and solutions. The implications for positive social change include the potential to reduce supply chain risk, which could lead to lower product prices for consumers, increased stakeholder satisfaction, and a higher standard of living.
