531

Mathematical Programming Models and Local Search Algorithms for the Offshore Rig Scheduling Problem

IURI MARTINS SANTOS 28 November 2018 (has links)
Offshore exploration and production (E&P) of oil and gas involves several complex and important operations, such as drilling, evaluation, completion, and workover of wells. Most of these tasks require rigs, a costly and scarce resource that oil companies must plan and schedule properly. In the literature, this decision is known as the Rig Scheduling Problem (RSP). However, few studies address offshore wells and drilling activities, and none of them considers realistic objective functions and constraints such as budget. As a result, many oil companies struggle to plan their rig fleets, which leads to large costs. Aiming to fill this gap, this dissertation studies the rig scheduling problem of a real offshore oil company and proposes a matheuristic approach to determine a rig fleet and schedule that minimizes the company's budget. Two mathematical programming models were developed, one that minimizes the number of rigs and another that minimizes the rig budget, with variants for the time unit adopted (day or week), together with several heuristics based on constructive methods, local search (LS), and variable neighborhood descent (VND) with three neighborhood structures and two search strategies (first and best improvement). All methods were tested on two instances, one small and one large, built from real data of the studied company. The three neighborhood structures rely on insert moves: one keeps the task allocation dates of the initial solution unchanged, while the other two allow tasks to be moved to earlier or later dates. The results show that the mathematical models struggle on the large instance, whereas the heuristics find similar solutions with far less computational effort. On the small instance, the exact budget-minimization model found slightly better solutions than the heuristics (differences between 0.4 percent and 5.6 percent), although it required more computational effort, especially in its daily time-unit variant. On the large instance, however, the mathematical programming solutions presented large optimality gaps (over 11 percent) and long running times (at least 12 hours), and the most complete model was unable to find feasible integer solutions or lower bounds after more than a day of computation. Meanwhile, the heuristics found similar or even better solutions (deviations between -6 percent and 14 percent with respect to the best exact solution) in a much shorter time (minutes), with 70 of the 156 heuristics developed outperforming the mathematical models. The best heuristic results were obtained with VND algorithms whose neighborhood structures perform insert moves of tasks onto existing or new rigs and allow tasks to be postponed or anticipated. The hybrid approach was also compared with a purely heuristic approach and obtained better results. Finally, the matheuristic that combines the rig-minimization model with the local search heuristics found the best known solutions (BKS) of the large instance with moderate computational effort. It is a fast and practical decision-support tool, with the potential to save offshore oil companies millions of dollars, capable of finding near-optimal schedules with little computational effort even on large instances where most exact methods are too complex and slow.
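The variable neighborhood descent with insert moves described above can be sketched roughly as follows; the rig and task representation, the cost figures (mobilization cost, day rate, lateness penalty), and the two neighborhood structures shown are illustrative assumptions, not the dissertation's actual model or instances.

```python
# Illustrative sketch of variable neighborhood descent (VND) over insert moves.
# The cost figures and the toy instance are hypothetical, not the thesis's data.
import copy

HORIZON = 90          # planning horizon in days (assumption)
FIXED = 2_000_000     # assumed mobilization cost per rig used
DAY_RATE = 100_000    # assumed rig day rate
LATE_PEN = 500_000    # assumed penalty per day of work beyond the horizon

def schedule_cost(schedule):
    cost = 0
    for tasks in schedule.values():
        if not tasks:
            continue
        busy = sum(t["days"] for t in tasks)
        cost += FIXED + busy * DAY_RATE + LATE_PEN * max(0, busy - HORIZON)
    return cost

def insert_moves(schedule, allow_new_rig):
    """Yield neighbor schedules obtained by moving one task to another rig."""
    rigs = list(schedule)
    for r in rigs:
        for i, task in enumerate(schedule[r]):
            targets = rigs + (["new_rig"] if allow_new_rig else [])
            for s in targets:
                if s == r:
                    continue
                nb = copy.deepcopy(schedule)
                nb[r].pop(i)
                if not nb[r]:
                    del nb[r]              # release an emptied rig
                nb.setdefault(s, []).append(task)
                yield nb

def local_search(schedule, neighborhood, best_improvement=True):
    improved = True
    while improved:
        improved = False
        best, best_c = None, schedule_cost(schedule)
        for nb in neighborhood(schedule):
            c = schedule_cost(nb)
            if c < best_c:
                best, best_c, improved = nb, c, True
                if not best_improvement:   # first-improvement strategy
                    break
        if improved:
            schedule = best
    return schedule

def vnd(schedule):
    # Two illustrative neighborhood structures: inserts among existing rigs,
    # then inserts that may also open a new rig.
    neighborhoods = [lambda s: insert_moves(s, False), lambda s: insert_moves(s, True)]
    k = 0
    while k < len(neighborhoods):
        candidate = local_search(schedule, neighborhoods[k])
        if schedule_cost(candidate) < schedule_cost(schedule):
            schedule, k = candidate, 0     # improvement: restart from the first neighborhood
        else:
            k += 1
    return schedule

if __name__ == "__main__":
    initial = {"rig1": [{"id": "w1", "days": 30}],
               "rig2": [{"id": "w2", "days": 25}],
               "rig3": [{"id": "w3", "days": 20}]}
    best = vnd(initial)
    print(len(best), schedule_cost(best))   # consolidates work onto fewer rigs
```

In the dissertation's hybrid method, a descent of this kind would presumably start from the rig fleet suggested by the rig-minimization model rather than from an arbitrary initial schedule.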
532

A contribution to topological learning and its application in Social Networks

Ezzeddine, Diala 01 October 2014 (has links)
Supervised learning is a popular field of machine learning that has made steady progress over the years. Many techniques have been developed to solve the classification problem, but most of them rely on the presence and number of points of a given class in the regions of space the classifier has to define, so the construction of the classifier depends on the density of the data point cloud. In this thesis, we show that using the topology of the data can be a good alternative when building classifiers. To this end, we propose using topological graphs such as the Gabriel Graph (GG) and the Relative Neighborhood Graph (RNG), which capture the topology of the data because they are based on neighborhood relations and do not depend on density. To apply this concept, we introduce a new method called Random Neighborhood Classification (RNC). This method uses topological graphs to build classifiers and, like an Ensemble Method (EM), combines several classifiers to extract all the relevant information from the data. Ensemble Methods are well known in machine learning: they generate many classifiers from the data and then aggregate them into a single one. The aggregated classifier has been shown in many studies to be very effective, because it draws on the information provided by each of its component classifiers. We first compared RNC with other well-known supervised classification methods on data from the UCI Irvine repository. RNC performs well compared with the most effective of them, such as Random Forests (RF) and Support Vector Machines (SVM), ranking most of the time among the top three methods in terms of effectiveness. This result encouraged us to study RNC on real data such as tweets. Twitter, a microblogging social network, is particularly useful for mining opinion on current affairs and on any topic of human interest, including politics. Mining political opinion from Twitter, however, poses specific challenges: the short messages, the level of language used, and their ambiguity make classical text-analysis tools based on word-frequency counts or deep sentence parsing very hard to apply, which is what motivated this study. We define a new attribute, called a couple, that is central to the study of tweet opinion: a couple is an author who talks about a politician. We propose a procedure that focuses on identifying the opinion expressed about a politician through couples, on the assumption that tweets rarely express an objective opinion about a particular action of a politician but more often reflect a deep conviction of their author about a political movement. Detecting the opinion of a few authors then allows us to exploit the similarity of the terms used by other authors to recover these convictions on a larger scale, and focusing on the opinion a couple expresses across several tweets helps overcome the versatility, language ambiguity, and other artifacts that are easy for a human being to interpret but hard to handle automatically. The procedure has two steps. First, we build a reference set of classified couples semi-automatically using Naive Bayes; a second, alternative sampling-plan procedure is applied to compare and evaluate the results of the Naive Bayes method. Second, we compare all the tweets of a couple (every tweet by an author mentioning a given politician) with those of the reference set, evaluating the performance of the approach with proximity measures in order to apply RNC, Random Forests, and KNN. The experiments are based on real tweets from the 2012 French presidential election.
The results show that the approach works well and that RNC performs very well at classifying opinion in tweets. Topological learning appears to be a very interesting field of study, in particular for addressing the classification problem; many concepts for extracting information from topological graphs remain to be analysed, such as those described by Aupetit, M. (2005). Our work shows that topological learning can be an effective way to tackle classification.
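For illustration, the two topological graphs named above have simple geometric definitions that can be built by brute force in O(n^3); the sketch below does so on made-up 2-D points and is not the thesis's implementation.

```python
# Brute-force construction of the Gabriel graph (GG) and the relative neighborhood
# graph (RNG) used in the thesis as density-independent representations of the data
# topology.  O(n^3) for clarity; the 2-D points below are a made-up example.
import numpy as np

def gabriel_graph(points):
    """Edge (i, j) iff no other point lies inside the disk with diameter ij."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    n = len(points)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if all(d2[i, k] + d2[j, k] >= d2[i, j] for k in range(n) if k not in (i, j))}

def relative_neighborhood_graph(points):
    """Edge (i, j) iff no point is closer to both i and j than they are to each other."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    n = len(points)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if all(max(d2[i, k], d2[j, k]) >= d2[i, j] for k in range(n) if k not in (i, j))}

if __name__ == "__main__":
    pts = np.random.default_rng(0).random((30, 2))
    gg, rn = gabriel_graph(pts), relative_neighborhood_graph(pts)
    assert rn <= gg        # the RNG is always a subgraph of the Gabriel graph
    print(len(gg), len(rn))
```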
533

The minimization of tool switches problem

Andreza Cristina Beezão Moreira 02 September 2016 (has links)
Several studies, especially in the last four decades, have focused on the decisive elements for the effective implementation of flexible manufacturing systems, such as their design, scheduling, and control. In this context, the appropriate management of the set of tools needed to manufacture a given lot of products has been highlighted as a crucial factor in the performance of the production system as a whole. This work deals with the optimization of the number of tool insertions into and removals from the magazine of one or more numerically controlled machines, assuming that a significant part of the production time is spent on such tool switches.
More precisely, the minimization of tool switches problem (MTSP) consists in determining the processing order of a set of jobs, as well as the optimal loading of the magazine(s) of the machine(s), so that the total number of tool switches is minimized. As formally demonstrated in the literature, the MTSP is NP-hard even when a single manufacturing machine is considered, which may explain why most solution methods tackle it heuristically. Its extension to the case of multiple machines is therefore also NP-hard and intrinsically difficult to solve. Our goal is to study efficient ways to optimize the number of tool switches in environments equipped with flexible manufacturing machines. To that end, we address the basic problem, the MTSP, and two variants of increasing scope that consider job sequencing on a set of: (i) identical parallel machines (Identical Parallel Machines problem with Tooling Constraints, IPMTC); and (ii) identical parallel machines embedded in a job shop environment (Job Shop Scheduling Problem with Tooling Constraints, JSSPTC). The main contributions of this thesis fall into three categories. First, we push the frontier of the MTSP literature by proposing mathematical formulations for the IPMTC and the JSSPTC. We also develop algorithms based on different solution techniques, such as domain reduction, path relinking, adaptive large neighborhood search, and dispatching rules. Finally, to fully evaluate the effectiveness and limits of our methods, three new sets of benchmark instances were generated. We believe this work contributes positively to future research on the broad topic of minimizing tool switches in flexible manufacturing systems.
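To make the problem statement concrete, the sketch below counts tool switches for a fixed job sequence on a single machine with a limited magazine, loading tools greedily by keeping those needed soonest, a classical policy in the MTSP literature for evaluating a given sequence. The jobs, tool sets, and capacity are made up, and the initial magazine load is counted as switches, a convention that varies between papers.

```python
# Sketch: counting tool switches for a fixed job sequence on a single machine.
# The magazine is loaded greedily by keeping the tools needed soonest.
def count_switches(sequence, tools_per_job, capacity):
    magazine = set()
    switches = 0
    for pos, job in enumerate(sequence):
        needed = tools_per_job[job]
        assert len(needed) <= capacity, "job needs more tools than the magazine holds"
        missing = needed - magazine
        while len(magazine) + len(missing) > capacity:
            # Evict the tool (not needed now) whose next use lies farthest ahead.
            def next_use(tool):
                for later in range(pos, len(sequence)):
                    if tool in tools_per_job[sequence[later]]:
                        return later
                return len(sequence)        # never needed again
            evict = max(magazine - needed, key=next_use)
            magazine.remove(evict)
        magazine |= missing
        switches += len(missing)
    return switches

if __name__ == "__main__":
    tools = {"j1": {1, 2, 3}, "j2": {2, 4}, "j3": {1, 4, 5}, "j4": {3, 5}}
    print(count_switches(["j1", "j2", "j3", "j4"], tools, capacity=4))   # prints 5
```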
534

Planning Resource Requirements in Rail Freight Facilities by Applying Machine Learning

Ruf, Moritz 10 January 2022 (has links)
This dissertation combines the knowledge of railway operations management with methods from operations research and machine learning. It focuses on rail freight facilities and, specifically, on their resource planning over the medium-term horizon, the so-called tactical planning, which plays a crucial role for economical, high-quality operations. Rail freight facilities are neuralgic points in the transport chain of goods by rail: their task is to carry out a multitude of different operational processes to ensure a defined output of trains for a given input. Providing a resource base that matches the operational requirements is part of tactical planning and has a significant impact on the quality of the processes in the facilities in particular and on the up- and downstream transport performance in general. The dimensioning of the resource requirements, that is, the necessary staff, locomotives, and infrastructure for an operating day, is in practice characterized by considerable manual effort and a strong dependence on data accuracy. Against this background, and to overcome these drawbacks, this dissertation proposes a new method for determining resource requirements. The focus is placed, by way of example, on the large facilities of single wagonload traffic, the so-called classification yards, in which inbound trains are disassembled, railcars are sorted and collected according to their outbound direction, and new outbound trains are formed and provided. According to the current state of practice, shunting work plans are created several months to a few weeks before operations. These operating plans comprise a detailed work-sequence plan, including the scheduling of the processes and their resource allocation, and they form the basis for the resource request. Because constraints change before the operating day, for example through the insertion of special trains, irregularities in the transports, or weather effects, and because of the stochastic nature of the operational processes in the network and in the facilities, the shunting work plans created during tactical planning can be used for execution only to a limited extent. The planning effort is driven by the complex interdependencies between the operational processes and the largely missing IT support, which has so far made determining the resource requirements difficult. The result is a discrepancy between the accuracy of the data as an input variable and the precision of the shunting work plan as an output variable, which leads to an illusory precision of the planning and possibly to an over- or undersizing of the resources. Hence, planning must be shortened and new tools are required.
Motivated by this discrepancy and the new possibilities offered by methods from the fields of operations research and machine learning, this dissertation provides a new planning method, Parabola. Parabola determines, with less planning effort and at high quality, the relevant parameters for resource requirements in rail freight facilities. This accelerates the tactical planning process, reduces illusory precision in the resource dimensioning before operations are carried out, and aligns decisions with the point in time at which they can be made reliably and accurately. Consequently, the level of detail of the planning is harmonized with the reliability of the data. The planning procedure Parabola analyses a sufficiently large number of calculated operating plans and / or historical operating data. The regression model trained in this way is then used to determine the resource requirements. Calibrating the regression models requires many operating plans. For their generation, an integrated mathematical linear program is developed in this dissertation using the example of classification yards; for the first time, one program covers all relevant decision problems of tactical planning in a classification yard, from train arrival to train departure. This includes the definition of the connections between inbound and outbound trains, so-called railcar interchanges, as well as the scheduling of all operational processes and their assignment to local staff, locomotives, and infrastructure. The existing mathematical models in the literature are limited to parts of this problem. This is followed by the systematic generation of a test pool of problem settings, so-called instances. The instances of this NP-hard problem cannot be reliably solved within an acceptable time frame by general-purpose solvers. Therefore, a tailored metaheuristic, namely an adaptive large neighborhood search (ALNS), is developed. It moves through the solution space by repeatedly destroying and then repairing a previously found solution, using several competing subheuristics. Because the subheuristics have different characteristics and their respective contribution to the solution progress is evaluated statistically, the ALNS adapts to the stage of the search and to the structure of the problem at hand. The ALNS developed in this dissertation generates high-quality solutions for realistic instances of an operating day within a few minutes of computing time. Based on the generated operating plans, five regression techniques were tested to predict the output variables of the plans (the demand for locomotives, staff, and infrastructure). The most promising results are achieved by tree boosting and random forest, which predict the resource requirements exactly for staff and locomotives in over 90 percent of the cases and, for infrastructure, within a tolerance of one track per bowl. After sufficient calibration to local conditions, such a regression model is therefore suited to replace more complex planning procedures. The regression models allow quantity structures and the performance behavior of rail freight facilities to be abstracted; hence, for example, a dedicated timetable to and from the facility is no longer a prerequisite for tactical planning in classification yards. Since the regression method learns from many operating plans, the dependence on individual instances is reduced, and knowledge of many other plans allows more robust resource requirements to be predicted.
In addition to the use case in the tactical planning of rail freight facilities, the proposed planning method Parabola opens up a multitude of further fields of application. Interpreting the trained regression model allows the behavior of rail freight facilities to be understood in depth, giving a better understanding of the bottlenecks in these facilities and of the relevant drivers of resource dimensioning. Furthermore, these models can be taken into account when network-wide performance requirements are specified. By providing Parabola, this dissertation sensibly extends the toolbox of railway operations science procedures and models with novel methods from operations research and machine learning.
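A generic sketch of the adaptive large neighborhood search scheme described above follows: competing destroy and repair subheuristics whose selection weights are adapted from the scores they earn. The acceptance rule, scores, cooling schedule, and the toy load-balancing demo are illustrative assumptions, not the yard-planning model itself.

```python
# Generic ALNS skeleton: adaptive selection of destroy/repair operators.
import math
import random

def alns(initial, cost, destroyers, repairers, iters=2000, reaction=0.2, seed=1):
    rnd = random.Random(seed)
    best = cur = initial
    best_c = cur_c = cost(initial)
    w_d = [1.0] * len(destroyers)       # selection weights of the destroy operators
    w_r = [1.0] * len(repairers)        # selection weights of the repair operators
    temp = abs(best_c) * 0.1 + 1.0      # crude simulated-annealing start temperature
    for _ in range(iters):
        di = rnd.choices(range(len(destroyers)), weights=w_d)[0]
        ri = rnd.choices(range(len(repairers)), weights=w_r)[0]
        cand = repairers[ri](destroyers[di](cur, rnd), rnd)
        cand_c = cost(cand)
        score = 0.0
        if cand_c < best_c:
            best, best_c, score = cand, cand_c, 3.0             # new global best
        elif cand_c < cur_c:
            score = 2.0                                         # improved current solution
        elif rnd.random() < math.exp((cur_c - cand_c) / temp):
            score = 1.0                                         # accepted worse solution
        if score > 0:
            cur, cur_c = cand, cand_c
        # Adapt the weights by exponential smoothing of each operator's score.
        w_d[di] = (1 - reaction) * w_d[di] + reaction * score
        w_r[ri] = (1 - reaction) * w_r[ri] + reaction * score
        temp *= 0.999                                           # cooling schedule
    return best, best_c

if __name__ == "__main__":
    # Toy demo: balance hypothetical workloads over three tracks; a real use would
    # register several competing destroy/repair subheuristics, not just one of each.
    jobs, tracks = [7, 3, 9, 4, 6, 2, 8, 5], 3
    cost = lambda a: max(sum(jobs[j] for j in range(len(jobs)) if a[j] == t) for t in range(tracks))
    def random_destroy(sol, rnd):        # unassign a few random jobs
        sol = list(sol)
        for j in rnd.sample(range(len(sol)), 3):
            sol[j] = None
        return sol
    def greedy_repair(sol, rnd):         # place each free job on the lightest track
        sol = list(sol)
        for j, t in enumerate(sol):
            if t is None:
                loads = [sum(jobs[k] for k in range(len(jobs)) if sol[k] == tt) for tt in range(tracks)]
                sol[j] = loads.index(min(loads))
        return tuple(sol)
    print(alns(tuple(j % tracks for j in range(len(jobs))), cost, [random_destroy], [greedy_repair]))
```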
535

Pediatric Cochlear Implant Outcomes in Auditory Neuropathy/Auditory Dys-Synchrony

Eby, Christine A. 07 July 2004 (has links)
No description available.
536

Perceptions of the Police and Fear of Crime: The Role of Neighborhood Social Capital

Williams, Seth Alan 18 November 2015 (has links)
No description available.
537

Neighborhood Satisfaction, Physical and Perceived Characteristics

Hur, Misun 24 December 2008 (has links)
No description available.
538

Measuring locational equity and accessibility of neighborhood parks in Kansas City, Missouri

Besler, Erica L. January 1900 (has links)
Master of Regional and Community Planning / Department of Landscape Architecture/Regional and Community Planning / Jason Brody / Recent research has focused on assessing equity with regard to the location of public services relative to the populations they serve. Unlike equality, equity involves providing services in proportion to need rather than giving everyone equal access. This study uses three commonly identified measures of accessibility (minimum distance, travel cost, and gravity potential) to assess how equitably higher-need residential populations of Kansas City, MO are served by neighborhood parks. Using Census 2000 socio-economic block-group data, areas with high concentrations of African-American and Hispanic populations, as well as areas of high density and low income, are characterized as having the most need. Correlating these higher-need populations with the accessibility measures reveals patterns of equity within the Kansas City, MO study area. Results indicate that while most of the high-need population is adequately and equitably served by neighborhood parks, some block groups still lack access to this type of public resource. This research follows methods proposed in previous studies that use the spatial mapping and analysis capabilities of ArcGIS, and it promotes the use of these tools for city planners in future park development decisions.
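A small sketch of the three accessibility measures named above (minimum distance, travel cost, gravity potential), computed between block-group centroids and parks. The straight-line distances, cost rate, distance-decay exponent, and park sizes are assumptions for illustration; the study's exact specifications may differ.

```python
# Minimum-distance, travel-cost, and gravity-potential accessibility measures.
import numpy as np

def accessibility(blocks, parks, park_acres, beta=2.0, cost_per_km=1.0):
    # Pairwise block-to-park distances (km), shape (n_blocks, n_parks).
    d = np.linalg.norm(blocks[:, None, :] - parks[None, :, :], axis=-1)
    minimum_distance = d.min(axis=1)              # distance to the nearest park
    travel_cost = cost_per_km * minimum_distance  # simple linear cost proxy
    # Gravity potential: park size discounted by distance decay, summed over parks.
    gravity = (park_acres[None, :] / np.maximum(d, 0.1) ** beta).sum(axis=1)
    return minimum_distance, travel_cost, gravity

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    blocks = rng.uniform(0, 20, size=(5, 2))   # made-up block-group centroids on a km grid
    parks = rng.uniform(0, 20, size=(3, 2))    # made-up park locations
    acres = np.array([12.0, 3.5, 40.0])        # made-up park sizes
    md, tc, gp = accessibility(blocks, parks, acres)
    print(np.round(md, 2), np.round(tc, 2), np.round(gp, 2))
```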
539

Responding to shock: a collaborative process for the St. Roch neighborhood

Mahoney, J. Liam January 1900 (has links)
Master of Landscape Architecture / Department of Landscape Architecture/Regional and Community Planning / Lee R. Skabelund / Hurricane Katrina displaced many New Orleans residents, leaving in its wake tens of thousands of vacant lots and buildings. In 2010, estimates showed that over 57,000 properties lay empty in the city, especially in the poorer neighborhoods. These properties are not contributing to the fabric of the city; in most places, they are a sign of defeat, an eyesore, or a haven for crime. The St. Roch neighborhood experiences the negative effects of these properties day in and day out, year after year. Almost a quarter of the lots in St. Roch are vacant, attracting crime and creating a nuisance and a blemish on the community. Coupled with this lack of ownership is an ailing stormwater management infrastructure that leads to flooding after routine storms. In addition to these concerns, there is a lack of fresh, inexpensive, and accessible food throughout the area. Although St. Roch's vacant lots have a negative effect on the community, they present a tremendous opportunity. Their dispersal around the neighborhood makes it possible to connect them to churches, schools, and retail outlets, and to provide other uses and services to the community. The thoughtful design of these locations will demonstrate a site-sensitive approach to the local ecology, culture, and economy of the neighborhood. Such design includes the community throughout the entire lifecycle of each site, from its planning phase to the end of its use. The primary goal throughout the planning and design process is to foster stewardship of both the landscape and the community as a whole by means of collaborative planning, direct interaction with each site during implementation, and the observation and monitoring of crucial processes throughout a site's lifecycle. The intent of this project is to apply a participatory framework to the site design process in order to rejuvenate critical areas of the St. Roch neighborhood. This project seeks to demonstrate the need for a collaborative process while allowing for a balance between the experts who help design each site and the community members who take ownership of the renewed parcels.
540

The local food environment and its association with obesity among low-income women across the urban-rural continuum

Ford, Paula Brigid January 1900 (has links)
Doctor of Philosophy / Department of Human Nutrition / David A. Dzewaltowski / The prevalence of obesity within the U.S. has risen dramatically in the past thirty years. Recent changes in food and physical activity environments may contribute to increased obesity prevalence, suggesting that disparities in these environments may be linked to the increased risk of obesity observed in low-income and racial/ethnic minority women. This dissertation characterizes the local food environment experienced by low-income women who participate in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) in Kansas, evaluates whether characteristics of the local food environment contribute to obesity risk, and examines how these relationships vary across the urban-rural continuum. Chapter One reviews the relevant literature examining the association between obesity and local food environments, and identifies three testable hypotheses that serve as the framework for later chapters. Chapter Two characterizes the local food environment and examines geographic, racial, ethnic, and socioeconomic disparities in the availability of small grocery stores and supermarkets. Chapter Three examines the association between store availability and obesity risk at an individual level among participants in the WIC Program, while Chapter Four utilizes multi-level modeling to examine the relationships between tract deprivation, tract store availability, and body mass index (BMI). Significant geographic disparities were observed in the availability of small grocery stores and supermarkets. Racial and ethnic disparities observed within tracts were not observed when examining store availability in a 1-mile radius around the residence of WIC mothers. The majority of women participating in the WIC program resided within a 1-mile radius of a small grocery store, and micropolitan and metropolitan WIC mothers had a multiplicity of food stores available within a 3-mile radius of residence. Food store availability was associated with increased obesity risk only in micropolitan areas. The availability of food stores did not mediate the association between tract deprivation and BMI, which varied across the urban-rural continuum. Overall, these results suggest that the relationship between local food environments and eating behaviors is complex, that limited store availability does not contribute to increased obesity risk in vulnerable populations, and that the association between local food environments and obesity risk varies across the urban-rural continuum.
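A sketch of the kind of two-level model the dissertation describes, with women nested in census tracts and BMI regressed on tract deprivation and store availability via a tract-level random intercept. It is fitted here to simulated data with assumed variable names, not to the WIC dataset.

```python
# Two-level (mixed-effects) regression sketch: BMI ~ deprivation + stores,
# random intercept per census tract.  All data below are simulated assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_tracts, per_tract = 40, 25
tract = np.repeat(np.arange(n_tracts), per_tract)
deprivation = np.repeat(rng.normal(size=n_tracts), per_tract)   # tract-level index
stores = np.repeat(rng.poisson(2, size=n_tracts), per_tract)    # stores within 1 mile
tract_effect = np.repeat(rng.normal(0, 1, size=n_tracts), per_tract)
bmi = (27 + 0.8 * deprivation - 0.1 * stores
       + tract_effect + rng.normal(0, 4, size=n_tracts * per_tract))

data = pd.DataFrame({"bmi": bmi, "deprivation": deprivation,
                     "stores": stores, "tract": tract})
model = smf.mixedlm("bmi ~ deprivation + stores", data, groups=data["tract"])
print(model.fit().summary())
```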
