About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

A Security Management System Design

Onder, Hulusi 01 July 2007 (has links) (PDF)
This thesis analyzes the difficulties of managing the security of an enterprise network. The problem addressed is the central management of a large number and variety of services that provide organization-wide network and information security. The study addresses two problem areas: how to better manage the security of a network, and how to better explain security issues to upper management. It proposes a Security Management System (SMS) for network security management, monitoring, and reporting. The system is a custom-made, centralized management solution that combines the critical performance indicators of the security devices and presents the results via web pages.
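As a rough illustration of what "combining critical performance indicators and presenting the results via web pages" can look like, here is a minimal sketch that aggregates per-device indicators into one HTML status table; the device names, indicator fields, and health thresholds are invented for the example, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class DeviceStatus:
    name: str            # e.g. "edge-firewall" (hypothetical device name)
    alerts: int          # active alerts reported by the device
    dropped_packets: int
    cpu_load: float      # 0.0 .. 1.0

def summarize(devices: list[DeviceStatus]) -> str:
    """Combine per-device indicators into one HTML status table."""
    rows = []
    for d in devices:
        # Assumed health rule: flag high CPU load or any active alert.
        health = "OK" if d.cpu_load < 0.8 and d.alerts == 0 else "CHECK"
        rows.append(f"<tr><td>{d.name}</td><td>{d.alerts}</td>"
                    f"<td>{d.dropped_packets}</td><td>{health}</td></tr>")
    return ("<table><tr><th>Device</th><th>Alerts</th>"
            "<th>Dropped</th><th>Status</th></tr>" + "".join(rows) + "</table>")

if __name__ == "__main__":
    fleet = [DeviceStatus("edge-firewall", 0, 120, 0.35),
             DeviceStatus("ids-sensor", 3, 0, 0.55)]
    print(summarize(fleet))   # one page summarizing the whole fleet
```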
32

Competition And Collaboration In Service Parts Management Systems

Usta, Mericcan 01 December 2010 (has links) (PDF)
The inventory management policies of two independent dealers in a service parts system with transshipment are studied in this thesis. Dealers can collaborate by pooling inventory or service. Revenue from a transshipment is shared, which can sometimes run contrary to the profit maximization of one party even though it increases the sum of profits. To assess the benefits of inventory pooling under equilibrium strategies, and the effect of competition on profits, a Markov Decision Process is formulated. A simpler variant of the optimal four-index threshold policy is used to characterize the production, service, and transshipment-related inventory decisions. A game-theoretical approach, together with notions from policy iteration, is used to find the best-response and equilibrium policies of the dealers. A numerical study investigates the effect of cost, revenue, and demand parameters, as well as dealer asymmetries, on the benefit of pooling, service levels, and transshipment flows. The analysis shows that commission schemes that fairly allocate transshipment value to the players, high customer traffic intensities, and low transshipment costs are the environments best suited to pooling. Centralizing the system is beneficial when inventory holding costs are high, transshipment costs are low, customer traffic intensities are high, or the commission structure disadvantages one party. Competition, within the experimental settings, dampens about 45% of the benefits of pooling.
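The pooling benefit that the thesis quantifies with a Markov Decision Process can be illustrated with a far cruder Monte Carlo toy, shown below: two dealers under a base-stock policy, with and without transshipment, sharing transshipment revenue through a commission. The demand distribution, prices, costs, and 50/50 commission split are invented for the example and are not the thesis's model.

```python
import random

def simulate(pool: bool, periods: int = 100_000, seed: int = 1) -> tuple[float, float]:
    rng = random.Random(seed)
    base, price, hold, commission = 3, 10.0, 1.0, 0.5   # assumed parameters
    profits = [0.0, 0.0]
    for _ in range(periods):
        stock = [base, base]                        # replenish up to base stock
        demand = [rng.randint(0, 5), rng.randint(0, 5)]
        for i in (0, 1):                            # serve local demand first
            sold = min(stock[i], demand[i])
            stock[i] -= sold
            demand[i] -= sold
            profits[i] += price * sold
        if pool:                                    # unmet demand filled by the other dealer
            for i in (0, 1):
                j = 1 - i
                ship = min(demand[i], stock[j])
                stock[j] -= ship
                demand[i] -= ship
                profits[i] += price * ship * (1 - commission)   # requester's share
                profits[j] += price * ship * commission         # supplier's share
        for i in (0, 1):
            profits[i] -= hold * stock[i]           # holding cost on leftover stock
    return profits[0] / periods, profits[1] / periods

print("no pooling:", simulate(False))
print("pooling:   ", simulate(True))
```

Running both settings shows how shared revenue lifts joint profit, and how the commission parameter decides whether each individual dealer gains from the arrangement.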
33

The Role of Governmental Policies in Nurturing the Pharmaceutical Industry in Brazil: The Mix of Centralized Procurement, Public Drug Production and Public-private Partnerships

SORTE JUNIOR, Waldemiro Francisco 28 March 2012 (has links)
No description available.
34

none

Wang, Hsiu-kai 26 July 2009 (has links)
none
35

Examining the relative costs and benefits of shifting the locus of control in a novel air traffic management environment via multi-agent dynamic analysis and simulation

Bigelow, Matthew Steven 28 June 2011 (has links)
The current air traffic management system has primarily evolved via incremental changes around historic control, navigation, and surveillance technologies. As a result, the system as a whole is not capable of handling air traffic capacities well beyond current levels, despite recent developments, such as ADS-B, that could potentially enable new concepts of operation. Methods of analyzing air traffic for safety and performance have also evolved around current-day operating constructs. Thus, attempts to examine future systems tend to use different analysis methods developed for each. Most notably, questions of 'locus of control' - whether control should be centralized or decentralized and distributed - have no common framework by which to judge relative costs and benefits. For instance, a completely centralized control paradigm is commonly asserted to provide an airspace-wide optimal traffic management solution due to its more complete picture of the state of the airspace, whereas a completely decentralized control paradigm is commonly asserted to provide a more user-specific optimal traffic management solution, to distribute the traffic management workload, and to potentially be more robust. Given the disparate nature of these assertions and the different types of evaluations commonly used with each, a shared framework must be established to allow comparisons between very different control paradigms. The objective of this thesis was to construct a formal framework to examine the relative costs and benefits of shifting the locus of control in a novel air traffic management environment. This framework provides useful definitions and quantitative measures of flexibility and robustness with respect to various control paradigms ranging between, and including, completely centralized and completely decentralized concepts of operation. Multi-agent dynamic analysis and simulation were used to analyze the range of dynamics found in the different control paradigms. In addition, futuristic air traffic management concepts were developed in sufficient detail to demonstrate the framework, which proved able to identify (or dispel) hypotheses about the relative costs and benefits of the locus of control.
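A toy version of such a shared metric can make the comparison concrete. The sketch below scores a centralized scheduler and a decentralized slot-grabbing policy on the same total-delay measure for a single runway with unit separation; the slot model and both policies are invented for illustration and are much simpler than the thesis's multi-agent framework.

```python
import random

def centralized(desired: list[float]) -> float:
    """A central scheduler sequences all flights globally (here: by desired time)."""
    total, slot = 0.0, 0.0
    for t in sorted(desired):
        slot = max(slot, t)          # unit separation between consecutive slots
        total += slot - t
        slot += 1.0
    return total

def decentralized(desired: list[float], rng: random.Random) -> float:
    """Each flight independently grabs the earliest slot it can see as free."""
    taken: list[float] = []
    total = 0.0
    for t in rng.sample(desired, len(desired)):   # arbitrary negotiation order
        slot = t
        while any(abs(slot - s) < 1.0 for s in taken):
            slot += 1.0              # back off until separation is satisfied
        taken.append(slot)
        total += slot - t
    return total

rng = random.Random(7)
flights = [rng.uniform(0, 10) for _ in range(15)]   # desired arrival times
print("centralized delay:  ", round(centralized(flights), 2))
print("decentralized delay:", round(decentralized(flights, rng), 2))
```

Because both paradigms are scored on one metric, their relative cost can be stated quantitatively rather than asserted, which is the essence of the framework argued for above.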
36

Radio resource management for wireless indoor communication systems : performance and implementation aspects

Pettersson, Stefan January 2004 (has links)
In this thesis, we investigate several radio resource management (RRM) techniques and concepts in an indoor environment with a dense infrastructure. Future wireless indoor communication networks will very likely be implemented at places where the user concentration is very high. At these hot spots, the radio resources must be used efficiently. The goal is to identify efficient RRM techniques and concepts that are suitable for implementation in an indoor environment.

Handling the high level of co-channel interference is shown to be of paramount importance. Several investigations in the thesis point this out to be the key problem in an indoor environment with a dense infrastructure. We show that a locally centralized radio resource management concept, the bunch concept, can give very high performance compared to other commonly used RRM concepts. Comparisons are made with distributed systems and systems using channel selection schemes like CSMA/CA. The comparisons are primarily made by capacity and throughput analysis based on system-level simulations. Results show that the centralized concept can give 85 percent higher capacity and 70 percent higher throughput than any of the compared systems.

We investigate several RRM techniques to deal with the channel interference problem and show that beamforming can greatly reduce the interference and improve system performance. Beamforming, especially with sector antennas, also reduces the transmitter powers and the necessary dynamic range. A comparison between TD/CDMA and pure TDMA clearly shows the performance benefits of using orthogonal channels that separate the users and reduce the co-channel interference. Different channel selection strategies are studied and evaluated, along with various methods to improve the capability of system co-existence.

We also investigate several practical measures to facilitate system implementation. Centralized RRM is suitable for guaranteeing QoS but is often considered too complex. With the studied centralized concept, the computational complexity can be reduced by splitting the coverage area into smaller pieces and covering each with its own centralized system. This reduces the complexity at the price of lost capacity due to the uncontrolled interference that the different systems produce. Our investigations show that sector antennas can be used to regain this capacity loss while maintaining a high reduction in complexity. Without capacity loss, the computational complexity can be reduced by a factor of 40 with sectoring. The implementation aspects also include the installation sensitivity of the indoor architecture and the effect of measurement errors in the link gains. The robustness against installation errors is high, but the bunch concept is quite sensitive to large measurement errors in the studied indoor environment. This effect can be reduced by additional SIR margins on the radio links.

The studied bunch concept is shown to be promising for use in future wireless indoor communication systems. It provides high performance and is feasible to implement.

Keywords: Radio resource management, indoor communication, the bunch concept, centralized RRM, dynamic channel allocation, channel selection, co-channel interference, power control, feasibility check, capacity, throughput, quality of service, beamforming, downtilting, sector antennas, co-existence, computational complexity, sensitivity analysis, measurement errors, infrastructure, system implementation, WLAN, HiperLAN/2, IEEE 802.11.
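As a concrete example of the "feasibility check" and "power control" listed among the keywords, the sketch below runs the classic iterative SIR-target power control (in the style of Foschini-Miljanic) that a centralized bunch unit could apply to measured link gains; the gain matrix, SIR target, and power budget are made-up numbers, not values from the thesis.

```python
def power_control(G, gamma, noise=1e-3, iters=200, p_max=1.0):
    """Iterate p_i <- gamma * (interference_i + noise) / G[i][i].

    Converges to the minimal power vector when the SIR target gamma is
    feasible (standard interference-function argument); declares the target
    infeasible if any power exceeds the budget p_max.
    """
    n = len(G)
    p = [0.01] * n                       # small initial transmit powers
    for _ in range(iters):
        new_p = []
        for i in range(n):
            interf = sum(G[i][j] * p[j] for j in range(n) if j != i)
            new_p.append(gamma * (interf + noise) / G[i][i])
        p = new_p
        if any(x > p_max for x in p):
            return None                  # SIR target infeasible within the budget
    return p

# Two co-channel links with cross-gains one tenth of the direct gains.
G = [[1.0, 0.1],
     [0.1, 1.0]]
print(power_control(G, gamma=5.0))       # feasible: converges near p = 0.01
```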
37

Placement of replicas in large-scale data grid environments

Shorfuzzaman, Mohammad 26 March 2012 (has links)
Data Grids provide services and infrastructure for distributed data-intensive applications accessing massive geographically distributed datasets. An important technique to speed access in Data Grids is replication, which provides nearby data access. Although data replication is one of the major techniques for improving data access, the problem of replica placement has not been widely studied for large-scale Grid environments. In this thesis, I propose improved data placement techniques useful when replicating potentially large data files in wide-area data grids. These techniques aim at faster data access as well as efficient utilization of bandwidth and storage resources. At the core of my approach is a new highly distributed replica placement algorithm that places data in strategic locations to improve overall data access performance while satisfying varying user/application and system demands. This improved efficiency of access to large data will improve the practicality of large-scale data- and compute-intensive collaborative scientific endeavors. My thesis makes several contributions towards improving the state of the art for replica placement in large-scale data grid environments. The major contributions are: (i) development of a new popularity-driven dynamic replica placement algorithm for hierarchically structured data grids that balances storage space utilization and access latency; (ii) creation of an adaptive version of the base algorithm that dynamically adapts the frequency and degree of replication based on factors such as data request arrival rates and available storage capacities; (iii) development of a new highly distributed algorithm to determine a near-optimal replica placement while minimizing replication cost (access and update) for a given traffic pattern; (iv) creation of a distributed QoS-aware replica placement algorithm that supports multiple quality requirements, from both user and system perspectives, to support efficient transfers of large replicas. Simulation results using widely observed data access patterns demonstrate how the effectiveness of my replica placement techniques is affected by various factors such as grid network characteristics (topology, number of nodes, storage and workload capacities of replica servers, link capacities, traffic pattern) and QoS requirements. Finally, I compare the performance of my algorithms to a number of relevant algorithms from the literature and demonstrate their usefulness and superiority for the conditions of interest.
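A heavily simplified sketch of the popularity-driven idea in contribution (i): requests are aggregated bottom-up through the grid hierarchy, and a replica is placed at the lowest node whose subtree demand reaches a threshold. The tree, request counts, and threshold are invented for illustration; the thesis's algorithm also weighs storage and latency, which this toy omits.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    requests: int = 0                    # local request count for the file
    children: list["Node"] = field(default_factory=list)

def place(node: Node, threshold: int, placements: list[str]) -> int:
    """Return unabsorbed subtree demand; place a replica at the lowest node
    whose aggregated demand reaches the threshold."""
    demand = node.requests + sum(
        place(child, threshold, placements) for child in node.children)
    if demand >= threshold:
        placements.append(node.name)     # replica here serves the whole subtree
        return 0                         # absorbed demand does not bubble further up
    return demand

# region1 aggregates 30 + 25 = 55 requests, so the replica lands there.
root = Node("root", 0, [
    Node("region1", 0, [Node("siteA", 30), Node("siteB", 25)]),
    Node("region2", 0, [Node("siteC", 5)]),
])
placements: list[str] = []
place(root, threshold=40, placements=placements)
print(placements)                        # -> ['region1']
```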
39

Process modeling for the intelligent management of information in centralized traffic control

Freitas, Julia Lopes de Oliveira January 2014 (has links)
Centralized traffic control aims to integrate and manage information, aiding real-time decision making. Due to the increasing complexity of the road network, especially in large cities, many studies have focused on traffic control systems, including the development of new technologies and tools. To manage this complexity, the public organizations that house Traffic Control Centers (TCCs) must understand and improve their processes, aligning and integrating them with their information systems so that demands can be met efficiently.
In this sense, this research builds on the concepts and practices of Business Process Management (BPM) to propose a model for structuring processes for the intelligent management of information in centralized traffic control. The work therefore develops through the phases and stages of the BPM cycle, covering everything from planning to process modeling, and presents as its final result a proposal for a process optimization plan. To meet the main goal of the research, the work was divided into three articles with increasing levels of granularity: (i) first, based on a case study, a comprehensive application of the methodology was presented, covering the first through third phases of the BPM cycle, after which the organization is able to run its processes and move on to the fourth phase, Control and Data Analysis; (ii) the second article details the second phase of the BPM cycle, the modeling and optimization of processes, in which the "As Is" processes were modeled and analyzed so that an improvement could be proposed in the form of a "To Be" macroprocess; (iii) to support the proposed "To Be" macroprocess, a systematic literature review compiled best practices on the subject, which were then checked against the Brazilian reality through interviews with experts. The result was a Map of Best Practices associated with a Reference Model for the Traffic Control Process, complemented by guidelines for the intelligent management of information in centralized traffic control. Together, these results define a "To Be" reference model of the work processes in TCCs. The main theoretical contribution of this research is thus the consolidation of best practices associated with a Reference Model for the Traffic Control Process, adjusted to the reality of Brazilian TCCs; from a practical standpoint, the methodology and results encourage the deployment of BPM not only in TCCs but in any department of a public organization.
40

Evaluation of a Centralized Substation Protection and Control System for HV/MV Substation

Ljungberg, Jens January 2018 (has links)
Today, conventional substation protection and control systems are of a widely distributed character. One substation can easily have as many as 50 data processing points that all perform similar algorithms on voltage and current data. There is also only limited communication between protection devices, and each device is only aware of the bay in which it is installed. With the intent of implementing a substation protection system that is simpler, more efficient, and better suited for future challenges, Ellevio AB implemented a centralized system in a primary substation in 2015. It comprises five components that each handle one type of duty: data processing, communication, voltage measurement, current measurement, and breaker control. Since its implementation, the centralized system has been in parallel operation with the conventional one, meaning that it performs station-wide data acquisition, processing, and communication but is unable to trip the station breakers. The only active functionality of the centralized system is the voltage regulation. This work evaluates the centralized system, studying its protection functionality, voltage regulation, fault response, and output-signal correlation with the conventional system. It was found that, to provide protection equivalent to that of the conventional system, the centralized system required the implementation of a differential protection function as well as protection of the capacitor banks and busbar coupling. The voltage regulation showed unsatisfactorily long regulation times, which could be a result of low time resolution. The fault response and signal correlation were deemed satisfactory.
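For readers unfamiliar with the differential protection function mentioned above, the sketch below shows the standard percentage-differential criterion on which such functions are based; the slope and pickup settings are illustrative assumptions, not values from the Ellevio installation.

```python
def differential_trip(i_in: complex, i_out: complex,
                      slope: float = 0.3, pickup: float = 0.2) -> bool:
    """Trip when the differential current exceeds a restrained threshold.

    i_in / i_out: current phasors (per unit) measured at the two ends of the
    protected zone, with a common reference direction into the zone.
    """
    i_diff = abs(i_in + i_out)                   # current "lost" inside the zone
    i_restraint = (abs(i_in) + abs(i_out)) / 2   # through-current level
    return i_diff > pickup + slope * i_restraint

# Through-fault: the same current enters and leaves the zone -> no trip.
print(differential_trip(1.0 + 0j, -(1.0 + 0j)))   # False
# Internal fault: current feeds in from both ends -> trip.
print(differential_trip(1.0 + 0j, 0.8 + 0j))      # True
```

The restraint term is what makes the scheme tolerate measurement error at high through-currents, which matters in a centralized design where all measurements pass through shared acquisition units.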
