81

Antiviral Resistance and Dynamic Treatment and Chemoprophylaxis of Pandemic Influenza

Paz, Sandro 21 March 2014 (has links)
Public health data show the tremendous economic and societal impact of past influenza pandemics. Currently, the welfare of society is threatened by the lack of planning to ensure an adequate response to a pandemic. Such preparation is difficult because the characteristics of the virus that would cause the pandemic are unknown, but primarily because the response requires decision-support tools based on scientific methods. The response to the next influenza pandemic will likely include extensive use of antiviral drugs, which will create an unprecedented selective pressure for the emergence of antiviral-resistant strains. Nevertheless, the literature lacks comprehensive models that simulate the spread and mitigation of pandemic influenza, including infection by an antiviral-resistant strain. We are building a large-scale simulation-optimization framework for the development of dynamic antiviral strategies, including treatment of symptomatic cases and pre- and post-exposure chemoprophylaxis. The model considers an oseltamivir-sensitive strain and a resistant strain with a low or high fitness cost, whose emergence is induced by the use of the various antiviral measures. The mitigation strategies incorporate age- and immunity-based risk groups for treatment and pre-/post-exposure chemoprophylaxis, as well as the duration of pre-exposure chemoprophylaxis. The model is tested on a hypothetical region in Florida, U.S., involving more than one million people. The analysis is conducted under different virus transmissibility and severity scenarios, with varying intensities of non-pharmaceutical interventions and varying levels of antiviral stockpile availability. The model is intended to support pandemic preparedness and response policy making.
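A minimal, hedged sketch of the kind of two-strain dynamics such a framework must capture is shown below. The compartmental structure and every parameter value are illustrative assumptions for exposition only, not the dissertation's calibrated model: antiviral treatment suppresses the sensitive strain but can seed a resistant strain that carries a fitness cost.

```python
# Illustrative two-strain SIR-type sketch (assumed parameters, not the thesis model).
import numpy as np

def simulate(days=200, N=1_000_000, beta=0.35, gamma=0.20,
             fitness_cost=0.20, treat_frac=0.40, treat_efficacy=0.60,
             emergence_prob=0.02):
    S, Is, Ir, R = N - 10.0, 10.0, 0.0, 0.0
    beta_r = beta * (1.0 - fitness_cost)              # resistant strain transmits less
    history = []
    for _ in range(days):
        eff_beta_s = beta * (1.0 - treat_frac * treat_efficacy)
        new_s = eff_beta_s * S * Is / N               # new sensitive infections
        new_r = beta_r * S * Ir / N                   # new resistant infections
        converted = emergence_prob * treat_frac * Is  # treatment-induced resistance
        rec_s, rec_r = gamma * Is, gamma * Ir
        S -= new_s + new_r
        Is += new_s - rec_s - converted
        Ir += new_r - rec_r + converted
        R += rec_s + rec_r
        history.append((S, Is, Ir, R))
    return np.array(history)

peak_resistant = simulate()[:, 2].max()               # peak resistant-strain prevalence
```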
82

A MULTI-COMMODITY NETWORK FLOW APPROACH FOR SEQUENCING REFINED PRODUCTS IN PIPELINE SYSTEMS

Acosta Amado, Rolando José 01 May 2011 (has links)
In the oil industry, a special class of pipelines is used to transport refined products. The problem of sequencing the inputs pumped through this type of pipeline seeks the optimal sequence of product batches, their destinations, and the amount of product to be pumped, such that the total operational cost of the system (or another operational objective) is optimized while product demands are satisfied according to the customers' requirements. This dissertation introduces a new modeling approach and proposes a solution methodology capable of dealing with the topology of all the scenarios reported in the literature so far. The system representation is based on a 0-1 multicommodity network flow formulation that models the dynamics of the system, including flow conservation constraints at the depots, the travel time of products from the refinery to their destination depots, and what happens upstream and downstream of the line whenever a product is being received at a given depot while another is being injected into the line at the refinery. It is assumed that the products are already available at the refinery and that their demand at each depot is deterministic and known beforehand. The model provides the sequence, amounts, destinations and traceability of the shipped batches of different products from their sources to their destinations over the entire planning horizon, minimizing pumping and inventory holding costs while satisfying time-window constraints. A survey of the available literature is presented. Given the problem structure, a decomposition-based solution procedure is explored with the intention of exploiting the network structure through the network simplex method. A branch-and-bound algorithm that exploits the dynamics of the system by assigning branching priorities to a selected set of variables is proposed, and computational results, obtained via GAMS/CPLEX, are reported for random instances of different sizes. Future research directions in this field are proposed.
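For orientation only, a generic 0-1 multicommodity network flow skeleton of the kind referred to above is sketched below; the time-expanded pumping and batch-tracking constraints of the actual dissertation model are omitted, and all symbols are illustrative assumptions.

\[
\begin{aligned}
\min \quad & \sum_{k \in K}\sum_{(i,j)\in A} c_{ij}^{k}\, x_{ij}^{k} \\
\text{s.t.} \quad & \sum_{j:(i,j)\in A} x_{ij}^{k} - \sum_{j:(j,i)\in A} x_{ji}^{k} = b_{i}^{k} && \forall\, i \in N,\; k \in K \\
& \sum_{k \in K} x_{ij}^{k} \le 1 && \forall\, (i,j) \in A \\
& x_{ij}^{k} \in \{0,1\} && \forall\, (i,j)\in A,\; k\in K
\end{aligned}
\]

Here \(K\) would index the refined products (commodities), \(b_{i}^{k}\) encodes supply at the refinery and demand at the depots, and the bundling constraint reflects that only one product can occupy a pipeline segment (arc) in a given time slot.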
83

Decision Support Models for Design of Fortified Distribution Networks

Li, Qingwei 01 January 2011 (has links)
Lean distribution networks have been facing increased exposure to the risk of unpredicted disruptions causing significant economic losses. At the same time, the existing literature contains very few studies that examine the impact of facility fortification on improving network reliability. This dissertation presents three related classes of models that support the design of reliable distribution networks. The models extend the uncapacitated P-median and fixed-charge location models by considering heterogeneous facility failure probabilities, supplier backups, and facility fortification within a finite budget. The first class of models considers binary fortification via linear fortification functions. The second class extends binary fortification to partial (continuous) reliability improvement with linear fortification, allowing a more efficient utilization of limited fortification resources. The third class generalizes linear fortification to nonlinear fortification, reflecting the diminishing marginal reliability improvement from fortification investment. For each class of models, we develop solution algorithms and demonstrate their computational efficiency, and we discuss in detail the novelty of the proposed models. The models are intended to support corporate decisions on the design of robust distribution networks using limited fortification resources.
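As a rough, hedged sketch of the first model class (binary fortification of P-median facilities with heterogeneous failure probabilities and a backup supplier), a formulation might look as follows; every symbol here is an illustrative assumption, and the dissertation's fixed-charge variants and exact linearizations are not reproduced.

\[
\begin{aligned}
\min\quad & \sum_{i\in I}\sum_{j\in J} h_i\Big[\big(1-q_j(z_j)\big)\,d_{ij} + q_j(z_j)\,r_i\Big]\,y_{ij}\\
\text{s.t.}\quad & q_j(z_j) = q_j^{0}(1-z_j) + q_j^{1} z_j && \forall\, j\in J\\
& \sum_{j\in J} y_{ij} = 1 \ \ \forall\, i\in I, \qquad y_{ij}\le x_j \ \ \forall\, i,j\\
& \sum_{j\in J} x_j = P, \qquad \sum_{j\in J} f_j z_j \le B, \qquad z_j \le x_j \ \ \forall\, j\\
& x_j,\, z_j,\, y_{ij}\in\{0,1\}
\end{aligned}
\]

with \(q_j^{0} > q_j^{1}\) the failure probabilities without and with fortification, \(r_i\) the backup-service penalty cost for customer \(i\), and \(B\) the fortification budget; the products \(q_j(z_j)\,y_{ij}\) would be linearized in a complete formulation.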
84

Περιβάλλουσα ανάλυση δεδομένων / Data envelopment analysis

Σαΐττης, Κωνσταντίνος 25 May 2015 (has links)
This thesis presents and analyses Data Envelopment Analysis (DEA), a method created to assess the efficiency of organizational units such as bank branches, schools, hospitals and restaurants. The key that enables the comparison of these units lies in the resources they use to produce their outputs. Data Envelopment Analysis was introduced in 1978 by Charnes, Cooper and Rhodes in their seminal study (Charnes et al., 1978); the paper addressed the estimation of the efficiency of non-profit organizations and can be regarded as an extension of the concept of technical efficiency given by Farrell in 1957. The first chapter presents the theoretical background of the method, the linear models behind it, the way efficiency is computed, the assumptions underlying the production possibility set, a graphical representation of the method, and an example combining all of the above. The second chapter presents alternative models that extend the basic ones, highlighting the versatility of the method; several of these models arose from the need to address a number of inconsistencies. Among the models presented are the additive model, the extended additive model, the multiplicative model, and models with exogenous and categorical variables. In contrast to the CCR and BCC models, the additive model can minimize inputs and maximize outputs simultaneously, whereas the CCR and BCC models can either minimize inputs or maximize outputs, but not both at once. The second chapter also presents the use of absolute bounds on the weights, whose peculiar behaviour contributes to some of the inconsistencies observed in the method.
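For reference, the standard input-oriented CCR envelopment model that evaluates a decision-making unit \(o\) against its \(n\) peers, with \(m\) inputs \(x\) and \(s\) outputs \(y\), is:

\[
\begin{aligned}
\min\quad & \theta\\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io} && i = 1,\dots,m\\
& \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro} && r = 1,\dots,s\\
& \lambda_j \ge 0 && j = 1,\dots,n
\end{aligned}
\]

Unit \(o\) is CCR-efficient when \(\theta^{*}=1\) with zero slacks; adding the convexity constraint \(\sum_{j}\lambda_j = 1\) yields the BCC (variable-returns-to-scale) model.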
85

Ανάλυση και υπολογιστική πολυπλοκότητα τεχνικών επίλυσης προβλημάτων γραμμικού προγραμματισμού / Analysis and computational complexity of techniques for solving linear programming problems

Κατσίκης, Αναστάσιος 08 February 2010 (has links)
The first chapter gives a historical retrospective of the birth and growth of Operational Research and Linear Programming, and presents the chronicle of the major discoveries: the simplex algorithm (Dantzig, 1949), the ellipsoid algorithm (Khachian, 1979) and the interior point algorithm (Karmarkar, 1983). The second chapter lays out the theoretical foundation of the simplex method, including both its geometric, intuitive presentation and its rigorous algebraic justification through theorems. The third chapter is devoted to the ellipsoid algorithm, the method that effectively proved that linear programming problems can be solved in polynomial time. The fourth chapter presents the most recent trend in solving linear programming problems, interior point methods: Karmarkar's algorithm, the class of affine-scaling methods, and the primal-dual interior point algorithm are developed. Finally, the fifth chapter introduces the concept of the computational complexity of algorithms, gives a complete complexity analysis of the simplex algorithm and of Karmarkar's interior point algorithm, and compares the two.
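As a quick, hedged illustration of the two algorithmic families compared in the thesis (this is not part of the thesis's own analysis), the same small linear program can be solved with a simplex-type solver and an interior-point solver via SciPy's HiGHS interface; the example problem itself is invented.

```python
# Hedged example: maximize 3x + 2y s.t. x + y <= 4, 2x + y <= 6, x, y >= 0.
from scipy.optimize import linprog

c = [-3, -2]                      # linprog minimizes, so negate the objective
A_ub = [[1, 1], [2, 1]]
b_ub = [4, 6]
bounds = [(0, None), (0, None)]

res_simplex = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds,
                      method="highs-ds")   # dual simplex
res_ipm = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds,
                  method="highs-ipm")      # interior point
print(res_simplex.x, res_ipm.x)            # both report the optimum x = y = 2
```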
86

Planejamento econômico de gráficos de controle X para monitoramento de processos autocorrelacionados / Economic design of X̄ control charts for monitoring autocorrelated processes

Franco, Bruno Chaves. January 2011 (has links)
Abstract: This research proposes an economic design of X̄ control charts for monitoring a quality characteristic whose observations fit a first-order autoregressive model with an additional error term. Duncan's cost model is used to select the chart parameters, namely the sample size, the sampling interval and the control limit coefficient, and a genetic algorithm is used to search for the minimum monitoring cost. Markov chains are used to determine the average number of samples until a signal and the number of false alarms. A sensitivity analysis showed that autocorrelation has an adverse effect on the chart parameters, increasing the monitoring cost and significantly reducing the chart's efficiency. / Advisor: Marcela Aparecida Guerreiro Machado / Co-advisor: Antonio Fernando Branco Costa / Examining committee: Fernando Augusto Silva Marins / Examining committee: Anderson Paula de Paiva / Master's degree
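As a hedged aside covering only the independent-observations case (the Markov-chain treatment of AR(1)-plus-error data in the thesis is not reproduced here), the average run length of a Shewhart X̄ chart can be computed as follows; the numerical values are illustrative.

```python
# ARL of a Shewhart X-bar chart for a mean shift of `delta` process standard deviations.
from scipy.stats import norm

def arl(L, n, delta=0.0):
    """Average run length for control limits at +/- L sigma/sqrt(n) and sample size n."""
    shift = delta * n ** 0.5
    p_signal = 1.0 - (norm.cdf(L - shift) - norm.cdf(-L - shift))
    return 1.0 / p_signal

print(arl(L=3.0, n=5))              # in-control ARL0, about 370
print(arl(L=3.0, n=5, delta=1.0))   # out-of-control ARL1 for a 1-sigma shift, about 4.5
```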
87

Optimising blood donation session scheduling in south east England

Jeffries, Thomas January 2015 (has links)
It is essential that all countries operate a form of blood banking service, where blood is collected at donation sessions, stored and then distributed to local healthcare providers. It is imperative that these services are efficiently managed to ensure a safe supply of blood and to keep costs and wastage minimal. Previous work in the area of blood management has focussed primarily on the perishable inventory problem and on routing blood deliveries to hospitals; there has been relatively little work on scheduling blood donation sessions. The primary aim of this research is to provide a tool that allows the National Blood Service (the English and Welsh blood service) to schedule donation sessions so that collection targets are met at minimum cost (the Blood Scheduling Problem). As secondary aims, the research identifies the key types of data that blood services should be collecting for this type of problem. Finally, various what-if scenarios are considered, specifically improving donor attendance by paying donors and the proposed changes to the inter-donation times for male and female donors. The Blood Scheduling Problem is formulated as a Mixed Integer Linear Programming (MILP) problem and solved using a variable bound heuristic. Data from the South East of England is used to create a collection schedule, with all further analysis also carried out on this data set. It was possible to reduce the number of units under-collected in the current schedule; moreover, the number of venues and panels operated could be reduced. Furthermore, it was found that paying donors to donate was uneconomical. Finally, changing the inter-donation times could lead to a reduction in the number of shortfalls, even when demand was increased by as much as 20%. Though the model is specific to England and Wales, it can easily be adapted to other countries' blood services. It is hoped that this work will provide blood services with a model to help them better schedule donation sessions and allow them to identify the data necessary to better understand their performance.
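A toy, hedged sketch of a session-scheduling MILP in the spirit of the Blood Scheduling Problem is given below. The venues, costs, expected yields, target and the two-team-per-day limit are all invented for illustration; this is neither the thesis model nor its data.

```python
# Pick donation sessions so expected collections meet a target at minimum cost.
import pulp

venues = ["A", "B", "C"]
days = range(5)
cost = {"A": 300, "B": 250, "C": 400}          # assumed cost of running one session
expected_units = {"A": 60, "B": 45, "C": 80}   # assumed expected collections per session
target = 600                                    # units needed over the planning horizon

m = pulp.LpProblem("blood_session_scheduling", pulp.LpMinimize)
run = pulp.LpVariable.dicts("run", [(v, d) for v in venues for d in days], cat="Binary")

m += pulp.lpSum(cost[v] * run[v, d] for v in venues for d in days)          # total cost
m += pulp.lpSum(expected_units[v] * run[v, d]
                for v in venues for d in days) >= target                    # meet target
for d in days:                                                              # team limit
    m += pulp.lpSum(run[v, d] for v in venues) <= 2

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([(v, d) for v in venues for d in days if run[v, d].value() == 1])
```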
88

Proposição de uma heurística utilizando Busca Tabu para a resolução do problema de escalonamento de veículos com múltiplas garagens / Proposal of a tabu search heuristic for solving the multiple depot vehicle scheduling problem

Casalinho, Gilmar D'Agostini Oliveira January 2012 (has links)
Logistics problems rely heavily on Operational Research to achieve greater efficiency in their operations. Among the various problems related to assigning vehicles in a logistics system, the Multiple Depot Vehicle Scheduling Problem (MDVSP) has been addressed in several studies. The MDVSP presupposes the existence of depots that affect the planning of the sequences in which trips must be performed. Exact methods often cannot solve the large instances encountered in practice, and several heuristic approaches have been developed to handle them. The main objective of this work was therefore to solve the MDVSP with a heuristic based on the tabu search method. The main motivation came from the indication that meta-heuristics have only recently been applied to the MDVSP (Pepin et al., 2008) and from the limitations identified in the study of Rohde (2008), which used a branch-and-bound algorithm in one of the steps of its heuristic, increasing solution time. The solution method was based on adaptations of traditional Operational Research techniques and produced very competitive results for the MDVSP in terms of objective function cost, number of vehicles used, and computational time.
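A generic, hedged tabu-search skeleton of the kind such a heuristic builds on is sketched below; the MDVSP-specific neighbourhood generator, move encoding and cost evaluation are assumed to be supplied elsewhere and are not those of the dissertation.

```python
from collections import deque

def tabu_search(initial, neighbors, evaluate, tenure=20, max_iters=1000):
    """Generic tabu search: `neighbors(sol)` yields (move, new_sol) pairs,
    `evaluate(sol)` returns a cost to be minimized."""
    best = current = initial
    best_cost = evaluate(initial)
    tabu = deque(maxlen=tenure)                 # short-term memory of recent moves
    for _ in range(max_iters):
        candidates = []
        for move, sol in neighbors(current):
            cost = evaluate(sol)
            # aspiration: accept a tabu move if it improves the best-known solution
            if move not in tabu or cost < best_cost:
                candidates.append((cost, move, sol))
        if not candidates:
            break
        cost, move, current = min(candidates, key=lambda c: c[0])
        tabu.append(move)
        if cost < best_cost:
            best, best_cost = current, cost
    return best, best_cost
```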
89

A aplicação de modelos matemáticos em situações-problema empresariais, com uso do software LINDO / The application of mathematical models to business problem situations using the LINDO software

Rehfeldt, Márcia Jussara Hepp January 2009 (has links)
This thesis aims to show how meaningful learning can be observed through the use of mathematical models when business administration students formulate and solve business problem situations with the aid of the LINDO software. The research was carried out with students of the Centro Universitário UNIVATES, in Lajeado, Rio Grande do Sul, while they attended the Operational Research course. The theoretical basis lies in Ausubel's (1968, 2003) theory of meaningful learning, in operational research and its solution tools, mainly the LINDO software, and in mathematical modelling. Methodologically, assessment instruments were applied to evaluate subsumers related to the ability to model linear programming problems. Given the absence of some subsumers, advance organizers were used as pedagogical mechanisms to establish relationships between what the students already knew and what they should know. Each student then developed at least two mathematical models and two concept maps, the first at the beginning of the research and the others at the end. As a result, it was observed that the mathematical modelling environment suggested by Barbosa (2006) favoured the observation of meaningful learning (AUSUBEL, 2003) of linear programming as the students abstracted and solved business problem situations with the aid of the LINDO software. In most cases the final mathematical models evolved, presenting more variables and constraints. Through the mathematical models and concept maps it was possible to observe evidence related to the professional requirements of an administrator, such as the ability to recognize and define problems and work out solutions, and the ability to think strategically and introduce changes into the production process. It should be emphasized that the mathematical models illustrate the knowledge each student possesses; they are therefore different, have different levels, and reflect the idiosyncrasy of the teaching-learning process, as postulated by Moreira (2005) and Biembengut (2003).
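Purely as a hypothetical illustration of the kind of business situation-problem students might formulate and then solve in LINDO (not one of the thesis's classroom problems), consider a small production-mix model:

\[
\begin{aligned}
\max\quad & 40x_1 + 30x_2 \\
\text{s.t.}\quad & 2x_1 + x_2 \le 100 && \text{(machine hours)}\\
& x_1 + 3x_2 \le 90 && \text{(labour hours)}\\
& x_1,\, x_2 \ge 0
\end{aligned}
\]

Solving this model, for instance in LINDO, gives \(x_1 = 42\) and \(x_2 = 16\) with an objective value of 2160.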
90

Text analytics in business environments: a managerial and methodological approach

Marcolin, Carla Bonato January 2018 (has links)
The decision-making process in different management environments is facing a moment of change in the organizational context. In this sense, Business Analytics can be seen as an area that leverages the value of data and provides important tools for the decision-making process. However, the presence of data in different formats poses a challenge. In this context of variability, text data has attracted the attention of organizations, as thousands of people express themselves daily in this format across the many applications and tools available. Although several techniques have been developed by the computer science community, there is ample scope to improve the organizational use of such text data, especially for decision support. Despite the importance and availability of textual data to support decisions, its use is not yet common because of the analysis and interpretation challenges posed by the volume and unstructured format of text. The aim of this dissertation is therefore to develop and evaluate a framework for the use of text data in decision-making processes, drawing on several natural language processing (NLP) techniques. The results demonstrate the validity of the framework, using the tourism sector, through the TripAdvisor platform, as a demonstration of its applicability, together with internal validation of its performance and its acceptance by the managers consulted.
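A hedged sketch of one step such a framework might include is shown below: extracting themes from review text with TF-IDF and NMF. The sample reviews, the number of components and the choice of NMF are illustrative assumptions, not the framework's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

reviews = [
    "Great location, friendly staff, spotless room.",
    "Room was noisy and the check-in took forever.",
    "Breakfast was excellent but the wifi kept dropping.",
    "Staff went out of their way to help with restaurant bookings.",
]

vectorizer = TfidfVectorizer(stop_words="english")   # bag-of-words with TF-IDF weights
X = vectorizer.fit_transform(reviews)

nmf = NMF(n_components=2, random_state=0)            # two latent themes, for illustration
doc_topics = nmf.fit_transform(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(nmf.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"theme {k}: {', '.join(top)}")
```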
