11

Evolving a Genetic Algorithm for Network Flow Maximization

Hafner, Jonathan H. 08 May 2012 (has links)
No description available.
12

Material Flow Optimization And Systems Analysis For Biosolids Management: A Study Of The City Of Columbus Municipal Operations

Sikdar, Kieran Jonah 10 September 2008 (has links)
No description available.
13

Automated Tracking of Mouse Embryogenesis from Large-scale Fluorescence Microscopy Data

Wang, Congchao 03 June 2021 (has links)
Recent breakthroughs in microscopy techniques and fluorescence probes enable the recording of mouse embryogenesis at the cellular level for days, easily generating terabyte-scale 3D time-lapse data. Since millions of cells are involved, this information-rich data brings a natural demand for an automated tool for its comprehensive analysis. Such a tool should automatically (1) detect and segment cells at each time point and (2) track cell migration across time. Most existing cell tracking methods cannot scale to data of such size and complexity, and those purposely designed for embryo data analysis sacrifice accuracy heavily. Here, we present a new computational framework for mouse embryo data analysis with high accuracy and efficiency. Our framework detects and segments cells with a fully probability-principled method, which not only has high statistical power but also helps determine the desired cell territories and increases segmentation accuracy. With the cells detected at each time point, our framework reconstructs cell traces with a new minimum-cost circulation-based paradigm, CINDA (CIrculation Network-based Data Association). Compared with the widely used minimum-cost flow-based methods, CINDA guarantees the globally optimal solution with the best-known theoretical worst-case complexity and hundreds-to-thousands-fold practical efficiency improvement. Since the information extracted from a single time point is limited, our framework iteratively refines cell detection and segmentation results based on the cell traces, which carry information from other time points. Results show that this dramatically improves the accuracy of cell detection, segmentation, and tracking. To make our work easy to use, we designed standalone software, MIVAQ (Microscopic Image Visualization, Annotation, and Quantification), with our framework as the backbone and a user-friendly interface.
With MIVAQ, users can easily analyze their data and visually check the results. / Doctor of Philosophy / Mouse embryogenesis studies mouse embryos from fertilization to tissue and organ formation. Current microscopy and fluorescent labeling techniques enable recording of the whole mouse embryo for a long time at high resolution. The generated data can reach terabytes and contain more than one million cells. This information-rich data brings a natural demand for an automated tool for its comprehensive analysis. The tool should automatically (1) detect and segment cells at each time point, capturing cell morphology, and (2) track cell migration across time. However, the development of analytical tools lags far behind the capability of data generation: existing tools either cannot scale to data of such size and complexity, or sacrifice accuracy heavily for efficiency. In this dissertation, we present a new computational framework for mouse embryo data analysis with high accuracy and efficiency. To make our framework easy to use, we also designed standalone software, MIVAQ, with a user-friendly interface. With MIVAQ, users can easily analyze their data and visually check the results.
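The data-association idea underlying circulation- and flow-based trackers can be illustrated at toy scale with a brute-force minimum-cost one-to-one linking of detections between two consecutive frames (CINDA itself solves the full multi-frame problem on a circulation network; the detections and squared-distance cost below are hypothetical):

```python
from itertools import permutations

def link_frames(dets_a, dets_b):
    """Brute-force minimum-cost one-to-one linking of detections
    between two frames, using squared Euclidean distance as link cost.
    A toy stand-in for one step of network-based data association."""
    assert len(dets_a) == len(dets_b)
    def cost(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(dets_b))):
        c = sum(cost(dets_a[i], dets_b[j]) for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = list(perm), c
    return best, best_cost

# Hypothetical 2-D detections in frames t and t+1.
frame_t  = [(0.0, 0.0), (5.0, 5.0)]
frame_t1 = [(5.2, 4.9), (0.1, 0.2)]
links, total = link_frames(frame_t, frame_t1)
print(links)   # detection 0 -> 1, detection 1 -> 0
```

The brute force is factorial in the number of detections; the point of flow- and circulation-network formulations is to get the same globally optimal matching, over all frames at once, in polynomial time.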
14

An Approach to QoS-based Task Distribution in Edge Computing Networks for IoT Applications

January 2018 (has links)
Internet of Things (IoT) is emerging as part of the infrastructure for advancing a large variety of applications involving connections of many intelligent devices, leading to smart communities. Due to the severe limitations of the computing resources of IoT devices, it is common to offload tasks requiring substantial computing resources to systems with sufficient resources, such as servers, cloud systems, and/or data centers. However, this offloading method suffers from both high latency and network congestion in the IoT infrastructure. Recently, edge computing has emerged to reduce the negative impacts of offloading tasks to remote computing systems. As edge computing is in close proximity to IoT devices, it can reduce the latency of task offloading and reduce network congestion. Yet edge computing has its drawbacks, such as the limited computing resources of some edge devices and the unbalanced loads among these devices. To effectively explore the potential of edge computing to support IoT applications, efficient task management and load balancing in edge computing networks are necessary. In this dissertation research, an approach is presented for periodically distributing tasks within the edge computing network while satisfying the quality-of-service (QoS) requirements of tasks. The QoS requirements include task completion deadlines and security requirements. The approach aims to maximize the number of tasks that can be accommodated in the edge computing network, with consideration of tasks' priorities. The goal is achieved through the joint optimization of computing resource allocation and network bandwidth provisioning. Evaluation results show the improvement of the approach in increasing the number of tasks accommodated in the edge computing network and its efficiency in resource utilization.
/ Dissertation/Thesis / Doctoral Dissertation Computer Engineering 2018
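A drastically simplified version of deadline-aware task distribution can be sketched as a priority-ordered greedy placement (the dissertation's approach is a joint optimization of compute and bandwidth; the node capacities, task sizes, and deadline model below are hypothetical):

```python
def place_tasks(tasks, nodes):
    """Greedy QoS-aware placement sketch: tasks are taken in descending
    priority and each is placed on the least-loaded node that can still
    finish it before its deadline.
    tasks: list of (name, priority, cycles, deadline_seconds)
    nodes: {node_name: cycles_per_second}"""
    load = {n: 0.0 for n in nodes}          # queued seconds of work per node
    placement, rejected = {}, []
    for name, prio, cycles, deadline in sorted(tasks, key=lambda t: -t[1]):
        feasible = [n for n in nodes if load[n] + cycles / nodes[n] <= deadline]
        if not feasible:
            rejected.append(name)           # cannot meet deadline anywhere
            continue
        n = min(feasible, key=lambda m: load[m])
        load[n] += cycles / nodes[n]
        placement[name] = n
    return placement, rejected

# Hypothetical workload: two edge nodes, three tasks.
nodes = {"edge1": 2e9, "edge2": 1e9}        # CPU cycles per second
tasks = [("video", 3, 4e9, 2.5), ("sensor", 2, 1e9, 1.0), ("backup", 1, 8e9, 3.0)]
placement, rejected = place_tasks(tasks, nodes)
print(placement, rejected)
```

Unlike the joint optimization in the dissertation, this greedy sketch ignores network bandwidth and security requirements entirely; it only shows why priorities and deadlines interact when capacity is scarce.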
15

Truck Dispatching and Fixed Driver Rest Locations

Morris, Steven Michael 24 August 2007 (has links)
This thesis analyzes how restricting rest (sleep) locations for long-haul truckers may impact operational productivity, given hours-of-service regulations. Productivity is measured here by the minimum number of unique drivers required to feasibly execute a set of load requests over a known planning horizon. When drivers may stop for rest at any location, they can maximize utilization under regulated driving hours. When drivers may rest only at certain discrete locations, their productivity may be diminished, since they may no longer be able to fully utilize available service hours. These productivity losses may require trucking firms to operate larger driver fleets. This thesis addresses two specific challenges presented by this scenario: first, understanding how a given discrete set of rest locations may affect driver fleet size requirements; and second, determining optimal discrete locations for a fixed number of rest facilities and the potential negative impact on fleet size of non-optimally located facilities. The minimum fleet size problem for a single origin-destination leg with fixed possible rest locations is formulated as a minimum-cost network flow with additional bundling constraints. A mixed integer program is developed for solving the single-leg rest facility location problem. Tractable adaptations of the basic models to handle problems with multiple lanes are also presented. This thesis demonstrates that for typical long-haul lane lengths, restricting rest to relatively few fixed locations has minimal impact on fleet size. For an 18-hour lane with two rest facilities, no increase in fleet size was observed for any test load set instance with exponentially distributed interdeparture times. For test sets with uniformly distributed interdeparture times, additional required fleet sizes ranged from 0 to 11 percent.
The developed framework and results should be useful in the analysis of truck transportation of security-sensitive commodities, such as food products and hazardous materials, where there may be strong external pressure to ensure that drivers rest only in secure locations to reduce the risk of tampering.
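The single-leg feasibility question the models build on can be sketched as a simple check of whether fixed rest stops partition a leg into drivable segments (the 11-hour driving limit echoes US hours-of-service rules, but the leg length and facility positions below are hypothetical, and no overall duty-window cap is modeled):

```python
def leg_feasible(leg_hours, rest_stops, max_drive=11.0):
    """Check whether a driver can cover a leg of `leg_hours` driving time
    while never driving more than `max_drive` hours between rests, resting
    only at the fixed stops in `rest_stops` (positions measured in driving
    hours from the origin). Resting at every stop never hurts feasibility
    when a rest fully resets the driving clock, so we simply do that."""
    pos = 0.0
    for stop in sorted(rest_stops) + [leg_hours]:
        if stop - pos > max_drive:
            return False          # next allowed stop is out of driving range
        if stop < leg_hours:
            pos = stop            # rest here; driving clock resets
    return True

print(leg_feasible(18.0, [9.0]))  # facility splits the leg into 9 + 9 hours
print(leg_feasible(18.0, [4.0]))  # remaining 14-hour segment is undrivable
```

A well-placed single facility makes the 18-hour leg feasible, while a poorly placed one does not, which is exactly the sensitivity to facility location that the thesis quantifies in terms of fleet size.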
16

Design And Implementation Of Scheduling And Switching Architectures For High Speed Networks

Sanli, Mustafa 01 October 2011 (has links) (PDF)
Quality of Service (QoS) schedulers are among the most important components for end-to-end QoS support in the Internet. The focus of this thesis is the hardware design and implementation of QoS schedulers that scale to high line speeds and large numbers of traffic flows; FPGA is the selected hardware platform. Previous work on the hardware design and implementation of QoS schedulers is mostly algorithm-specific. In this thesis, a general architecture for the design of the class of Packet Fair Queuing (PFQ) schedulers is proposed. The Worst-case Fair Weighted Fair Queuing Plus (WF2Q+) scheduler is implemented and tested in hardware to demonstrate the proposed architecture and design enhancements. The maximum line speed at which PFQ algorithms can operate decreases as the number of scheduled flows increases. For this reason, this thesis proposes to aggregate flows in order to scale the PFQ architecture to high line speeds. The Window Based Fair Aggregator (WBFA) algorithm that this thesis suggests for flow aggregation provides a tunable trade-off between efficient use of the available bandwidth and fairness among the constituent flows. WBFA is also integrated into the hardware PFQ architecture. The QoS support provided by the proposed PFQ architecture and WBFA is measured through hardware experiments on a custom-built high-speed network testbed consisting of three data processing cards and a backplane. In these experiments, the input traffic is provided by a hardware traffic generator designed within the scope of this thesis.
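The scheduling discipline shared by the PFQ class can be illustrated with a minimal software model based on virtual finish times (WF2Q+ additionally applies an eligibility test on virtual start times, which this sketch omits; the flows, weights, and packet sizes are hypothetical):

```python
import heapq

class PFQSketch:
    """Minimal packet-fair-queuing model: each packet gets a virtual
    finish time F = max(V, last_F_of_flow) + size / weight, and the
    scheduler always transmits the packet with the smallest F."""
    def __init__(self, weights):
        self.weights = weights
        self.last_finish = {f: 0.0 for f in weights}
        self.heap = []                      # (finish_time, seq, flow)
        self.seq = 0                        # tie-breaker for equal finish times
        self.vtime = 0.0

    def enqueue(self, flow, size):
        start = max(self.vtime, self.last_finish[flow])
        finish = start + size / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow))
        self.seq += 1

    def dequeue(self):
        finish, _, flow = heapq.heappop(self.heap)
        self.vtime = finish                 # crude virtual-time update
        return flow

sched = PFQSketch({"A": 2.0, "B": 1.0})     # flow A gets twice B's share
for _ in range(4):
    sched.enqueue("A", 100)                 # equal-size packets on both flows
    sched.enqueue("B", 100)
order = [sched.dequeue() for _ in range(8)]
print(order)
```

With weight 2 versus 1, flow A's packets are interleaved roughly twice as often as B's early in the schedule, which is the bandwidth-proportional behavior the hardware schedulers in the thesis must sustain at line rate.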
17

Topics in discrete optimization: models, complexity and algorithms

He, Qie 13 January 2014 (has links)
In this dissertation we examine several discrete optimization problems through the perspectives of modeling, complexity, and algorithms. We first provide a probabilistic comparison of split and type 1 triangle cuts for mixed-integer programs with two rows and two integer variables, in terms of cut coefficients and volume cut off. Under a specific probabilistic model of the problem parameters, we show that for this measure, the probability that a split cut is better than a type 1 triangle cut is higher than the probability that a type 1 triangle cut is better than a split cut. The analysis also suggests some guidelines on when type 1 triangle cuts are likely to be more effective than split cuts and vice versa. We next study a minimum concave cost network flow problem over a grid network. We give a polynomial-time algorithm to solve this problem when the number of echelons is fixed, and show that the problem is NP-hard when the number of echelons is an input parameter. We also extend our result to grid networks with backward and upward arcs. Our result unifies the complexity results for several models in production planning and green recycling, including the lot-sizing model, and gives the first polynomial-time algorithm for some problems whose complexity was not previously known. Finally, we examine how much complexity randomness brings to a simple combinatorial optimization problem. We study the sell or hold problem (SHP): sell k out of n indivisible assets over two stages, with known first-stage prices and random second-stage prices, so as to maximize the total expected revenue. Although the deterministic version of SHP is trivial to solve, we show that SHP is NP-hard when the second-stage prices are realized as a finite set of scenarios. We show that SHP is polynomially solvable when the number of scenarios in the second stage is constant. A max{1/2, k/n}-approximation algorithm is presented for the scenario-based SHP.
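The triviality of the deterministic sell or hold problem mentioned above can be made concrete: sell each of the k most valuable assets at its better stage (the prices below are hypothetical):

```python
def sell_or_hold_det(p1, p2, k):
    """Deterministic SHP: with both stage prices known, the optimal
    revenue is the sum of the k largest values of max(p1[i], p2[i]),
    selling each chosen asset at whichever stage pays more."""
    best = sorted((max(a, b) for a, b in zip(p1, p2)), reverse=True)
    return sum(best[:k])

p1 = [10, 4, 7]        # known first-stage prices
p2 = [6, 9, 8]         # second-stage prices (a single known scenario)
print(sell_or_hold_det(p1, p2, 2))   # per-asset maxima are 10, 9, 8
```

Once the second stage becomes a set of scenarios, the "sell now or hold" decision must be made before knowing which scenario occurs, and this coupling across assets is what makes the stochastic version NP-hard.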
18

Rychlé rozpoznání aplikačního protokolu / Fast Recognition of Application Protocol

Adámek, Michal January 2012 (has links)
This thesis focuses on methods for fast recognition of application protocols, meaning recognition with minimal delay from the moment the first data packet sent from the source node is captured. The thesis describes possible techniques and methods for recognizing application protocols, along with basic information about, and a description of, a reference system for lawful interception in computer networks. It then covers the analysis, design, and implementation phases of a tool for fast recognition of application protocols. The conclusion presents the results of tests performed with the tool and discusses its limitations and possible extensions.
19

Fast Generator of Network Flows

Budiský, Jakub January 2016 (has links)
This master's thesis analyzes existing solutions for generating network traffic intended for testing network components. It focuses on generators operating at the level of IP network flows and covers the design and implementation of a generator, named FLOR, capable of producing synthetic network traffic at rates of up to several tens of gigabits per second. Flow scheduling is driven by a random process. The resulting application is tested and compared with existing tools. The conclusion proposes further improvements and optimizations.
20

[pt] DESENVOLVIMENTO DE UM MODELO DE OTIMIZAÇÃO PARA O PLANEJAMENTO DE TRENS DE CARGA GERAL / [en] DEVELOPMENT OF AN OPTIMIZATION MODEL FOR GENERAL CARLOAD TRAIN PLANNING

DOUGLAS DOS REIS DUARTE 16 June 2021 (has links)
[en] Train Planning is of great importance for general carload transportation on railroads. The plan must determine which trains will run, their frequencies, which routes are served, and the cars that will compose each train. In this dissertation, a mixed integer programming model is proposed to optimize general carload train planning, seeking to minimize the costs involved in creating and operating the trains. The model was applied to a Brazilian freight railway for the planning of 12 periods. It ran with an average processing time of 15 hours, considered acceptable since it addresses a tactical problem that defines the trains of the next planning period. Compared with the actual data, the model produced an average reduction of 10.1 percent in train operating costs. The proposed plan made better use of wagon connections to avoid creating trains with low occupancy, thus reducing costs. The results also gave the railroad's train planners much faster analyses, which today are carried out manually, allowing a better view of which trains should be created for each period's demand profile.
