21

Integration of a GIS and an expert system for freeway incident management

Jonnalagadda, Srikanth 18 September 2008 (has links)
Congestion due to traffic accidents and incidents can be reduced through effective freeway incident management. However, incident management is plagued by a number of problems and requires a high level of expertise and coordination among the personnel involved. The ill-structured nature of the problem, constantly changing conditions, the number of agencies involved, and the lack of current information often cause errors in decision and response. Under these conditions, there is a need for computer-based support tools to provide the required decision and information support and to aid the entire process by improving coordination and communication. This study addresses this issue through the development of an Expert-GIS system which integrates the powerful spatial data handling capabilities of a Geographic Information System with the rule-based reasoning logic of an Expert System. The system is designed as a Group Decision Support System that supports both the substance of the problem (decisions) and the agency-level interactions that take place. The ability to support the response process is modeled using a blackboard architecture. The prototype fully integrates the software environments of Arc/Info and Nexpert-Object and presents a unified interface from which the different incident management functions can be accessed. A complete spatial database was designed for Fairfax County in Northern Virginia as part of this development effort. Decision support is provided through a set of six integrated modules: incident detection and verification, preliminary response, duration estimation, delay calculation, final response plan and diversion planning, and recovery. Coordination and communication were enhanced by ensuring the uniformity of information at the different locations using the system, and through a messaging mechanism that informed users about the current status of an incident. 
The prototype system was developed for two hypothetical agencies, the Traffic Management Center and the Police Control Center. Historical incident cases were used to test these systems and to check the accuracy of the database and the rule base. Both the tests and the development effort showed a strong need for established sources of network information that could be readily incorporated into the database. Given that the system works with real network data, the next phase of research should focus on deploying the system at test sites. User feedback obtained from these tests would then serve as a basis for future enhancements. / Master of Science
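The blackboard coordination described in this abstract can be illustrated with a minimal sketch. All names and the toy incident data below are invented for illustration; the actual Arc/Info and Nexpert-Object integration is far richer:

```python
# Minimal blackboard sketch: knowledge sources (here, incident-management
# steps) read partial solutions from a shared blackboard and post new ones;
# a controller cycles until a response plan exists. Illustrative only.

class Blackboard:
    def __init__(self):
        self.data = {}

    def post(self, key, value):
        self.data[key] = value

def detect_incident(bb):
    if "incident" not in bb.data:
        bb.post("incident", {"location": "I-66 MM 52", "type": "accident"})

def estimate_duration(bb):
    if "incident" in bb.data and "duration_min" not in bb.data:
        bb.post("duration_min", 45)  # would come from the rule base

def plan_response(bb):
    if "duration_min" in bb.data and "plan" not in bb.data:
        bb.post("plan", "dispatch tow truck; divert via Route 50")

def run(bb, knowledge_sources):
    # Fire each knowledge source repeatedly until a plan appears.
    while "plan" not in bb.data:
        for ks in knowledge_sources:
            ks(bb)
    return bb.data["plan"]

bb = Blackboard()
print(run(bb, [detect_incident, estimate_duration, plan_response]))
```

Each "agency" only sees the shared blackboard, which is what lets the architecture support loosely coordinated, multi-agency response.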
22

Podpora IS/IT v SCM automobilového průmyslu / IS/IT support in SCM of automotive industry

Tománek, Martin January 2009 (has links)
The specifics of, and new trends in, the automotive industry impose higher requirements on supply chain management. This thesis describes one part of supply chain management: reverse logistics and the circulation of returnable transport units. The main goal and contribution of this thesis is to analyse the processes of the team managing the circulation of returnable transport units in the British automobile factories of Jaguar and Land Rover. A further aim and contribution is to apply the ITIL Incident Management methodology to the creation and implementation of an application which is based on the process analysis and supports the information needs of the individual roles in this team.
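An application built around ITIL Incident Management, as described above, typically centres on an incident lifecycle. A minimal sketch follows; the states and transitions are illustrative assumptions, not taken from the thesis or from the ITIL text itself:

```python
# Illustrative ITIL-style incident lifecycle as a small state machine.
# State names and transitions are assumptions for the sketch.

ALLOWED = {
    "new": {"in_progress"},
    "in_progress": {"resolved", "escalated"},
    "escalated": {"in_progress", "resolved"},
    "resolved": {"closed", "in_progress"},  # reopen if the fix fails
    "closed": set(),
}

class Incident:
    def __init__(self, summary):
        self.summary = summary
        self.state = "new"
        self.history = ["new"]

    def transition(self, new_state):
        # Reject transitions the lifecycle does not allow.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

inc = Incident("returnable transport unit not scanned at the plant")
inc.transition("in_progress")
inc.transition("resolved")
inc.transition("closed")
print(inc.history)  # ['new', 'in_progress', 'resolved', 'closed']
```

Encoding the allowed transitions in one table keeps the lifecycle auditable, which matters when each role in the team sees only its own incidents.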
23

Early warning system for the prediction of algal-related impacts on drinking water purification / Annelie Swanepoel

Swanepoel, Annelie January 2015 (has links)
Algae and cyanobacteria occur naturally in source waters and are known to cause extensive problems in the drinking water treatment industry. Cyanobacteria (especially Anabaena sp. and Microcystis sp.) are responsible for many problems in drinking water treatment works (DWTW) all over the world because of their ability to produce organic compounds such as cyanotoxins (e.g. microcystin) and taste and odour compounds (e.g. geosmin) that can adversely affect consumer health and consumer confidence in tap water. The monitoring of cyanobacteria in source waters entering DWTW has therefore become an essential part of drinking water treatment management. Managers of DWTW rely heavily on the results of physical, chemical and biological water quality analyses for their management decisions. However, these results can be delayed by 3 hours to a few days depending on a multitude of factors, such as sampling, distance and accessibility to the laboratory, laboratory sample turn-around times, and the specific methods used in the analyses. The use of on-line (in situ) instruments that can supply real-time results at the click of a button has therefore become very popular in the past few years. On-line instruments have been developed for analyses such as pH, conductivity, nitrate, chlorophyll-a and cyanobacteria concentrations. Although this real-time (on-line) data gives drinking water treatment managers a better opportunity to make sound decisions about treatment options based on the latest possible results, it may still be “too little, too late” once a sudden cyanobacterial bloom of especially Anabaena sp. or Microcystis sp. enters the plant. The benefit to drinking water treatment management of shifting the focus from real-time results to future predictions of water quality has therefore become apparent. 
The aims of this study were 1) to review the environmental variables associated with cyanobacterial blooms in the Vaal Dam, so as to get background on the input variables that can be used in cyanobacterial-related forecasting models; 2) to apply rule-based Hybrid Evolutionary Algorithms (HEAs) to develop models using a) all applicable laboratory-generated data and b) on-line measurable data only, as input variables in prediction models for harmful algal blooms in the Vaal Dam; 3) to test these models with data that was not used to develop the models (so-called “unseen data”), including on-line (in situ) generated data; and 4) to incorporate selected models into two cyanobacterial incident management protocols which link to the Water Safety Plan (WSP) of a large DWTW (case study: Rand Water). In the current study, physical, chemical and biological water quality data from 2000 to 2009, measured in the Vaal Dam and the 20 km long canal supplying the Zuikerbosch DWTW of Rand Water, were used to develop models for the prediction of Anabaena sp., Microcystis sp., the cyanotoxin microcystin and the taste and odour compound geosmin for different prediction or forecasting times in the source water. For the development and first stage of testing of the models, 75% of the dataset was used to train the models and the remaining 25% was used to test them. Bootstrapping was used to determine which 75% of the dataset served as the training dataset and which 25% as the testing dataset. Models were also tested with 2 to 3 years of so-called “unseen data” (Vaal Dam 2010 – 2012), i.e. data not used at any stage during model development. Fifty different models were developed for each set of “x input variables = 1 output variable” chosen beforehand, and from these 50 models the one with the best fit between measured and predicted data was chosen. 
Sensitivity analyses were also performed on all input variables to determine which variables have the largest impact on the output. This study has shown that hybrid evolutionary algorithms can successfully be used to develop relatively accurate forecasting models, which can predict cyanobacterial cell concentrations (particularly Anabaena sp. and Microcystis sp.), as well as the cyanotoxin microcystin concentration in the Vaal Dam, up to 21 days in advance (depending on the output variable and the model applied). The models that performed best were those forecasting 7 days in advance (R2 = 0.86, 0.91 and 0.75 for Anabaena[7], Microcystis[7] and microcystin[7] respectively). Although no optimisation strategies were applied, the models developed during this study were generally more accurate than most models developed by other authors using the same concepts, and even than models optimised by hill climbing and/or differential evolution. It is speculated that including the “initial cyanobacteria inoculum” as an input variable (which is unique to this study) is most probably the reason for the better-performing models. The results show that models developed from on-line (in situ) measurable data only are almost as good as the models developed using all possible input variables, most probably because the “initial cyanobacteria inoculum” – the variable towards which the output showed the greatest sensitivity – is included in these models. Generally, models predicting Microcystis sp. in the Vaal Dam were more accurate than models predicting Anabaena sp. concentrations, and models with a shorter prediction time (e.g. 7 days in advance) were statistically more accurate than models with longer prediction times (e.g. 14 or 21 days in advance). 
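The 75/25 split and the R2 scoring used to rank the 50 candidate models can be sketched as follows. This is a simple random split standing in for the thesis's bootstrap procedure, and the HEA model itself is not reproduced; only the evaluation scaffolding is shown:

```python
# Sketch of the train/test split and R^2 (coefficient of determination)
# scoring described in the abstract. Plain random sampling is used here
# as a stand-in for the bootstrapping procedure of the study.
import random

def split_75_25(samples, seed=0):
    # Shuffle indices and cut at 75% for training, 25% for testing.
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(0.75 * len(samples))
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

def r_squared(observed, predicted):
    # R^2 = 1 - SS_res / SS_tot, the statistic used to compare models.
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1 - ss_res / ss_tot
```

Ranking the 50 models then amounts to computing `r_squared` on the held-out 25% for each and keeping the highest score.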
The multi-barrier approach to risk reduction, as promoted by the concept of water safety plans under the banner of the Blue Drop Certification Program, lends itself to the application of future predictions of water quality variables. In this study, prediction models of Anabaena sp., Microcystis sp. and microcystin concentrations 7 days in advance for the Vaal Dam, as well as geosmin concentration 7 days in advance for the canal, were incorporated into the proposed incident management protocols. This was achieved by adding an additional “Prediction Monitoring Level” to Rand Water's microcystin and taste and odour incident management protocols, so that they also include future predictions of cyanobacteria (Anabaena sp. and Microcystis sp.), microcystin and geosmin. The novelty of this study was the incorporation of future predictions into the water safety plan of a DWTW, which has never been done before. This adds another barrier against the potential exposure of drinking water consumers to harmful and aesthetically unacceptable organic compounds produced by cyanobacteria. / PhD (Botany), North-West University, Potchefstroom Campus, 2015
25

Case study: the experience of managers: the how of organisational learning after patient incidents in a hospital

Mok, Yin Shan Joyce January 2009 (has links)
This case study describes the learning capability of a hospital after patient incidents. The theoretical framework is based on Carroll, Rudolph and Hatakenaka's model of four stages of organisational learning. Ten managers were interviewed, and documents such as the incident management policy, quality plans and incident reports were examined. The ten participants comprise five clinical managers who are responsible for investigating incidents and five unit managers who are responsible for signing off incident reports. This study found that incident investigations generated valuable learning for the participants. As learning agents, they also appeared to influence and lead team learning and, to some extent, organisational learning. Most of the participants appeared to be practising between the constrained stage and the open stage of learning. This study uncovers the concepts of preparedness, perception and persistence. The application of these exemplary concepts has strengthened the learning capability of some participants and distinguishes them as practising at the open stage of learning. By employing these concepts, The Hospital can also gain leverage to progress from the constrained stage to the open stage of learning, which supports a systems approach, advocates double-loop learning and facilitates a culture of safety. This case study found that The Hospital assumes a controlling orientation to ensure staff compliance with policies and procedures to prevent patient incidents. However, it also advocates a safety culture and attempts to promote learning from patient incidents. This impetus is inhibited by obstacles in its incident management system, the weak modes of transfer of learning, and hindering organisational practices. Three propositions are offered to overcome these barriers. 
Firstly, revolutionise the incident management system to remove obstacles due to the rigid format of Incident Forms, the difficulty in retrieving information and the lack of feedback. Secondly, provide regular, safe, transparent and egalitarian forums for all staff to learn from patient incidents. Facilitated incident meetings have been shown to be more effective platforms for learning than a bureaucratic approach via policies, procedures, training and directive decisions delivered during departmental meetings or by written communications. Thirdly, attain a balance between controlling and learning to mitigate the effects of bureaucratic process and the silo phenomenon.
26

A Benefit-Cost Analysis of a State Freeway Service Patrol: A Florida Case Study

Singh, Harkanwal Nain 29 March 2006 (has links)
The Road Ranger program is a freeway service patrol (FSP) designed to assist disabled vehicles along congested freeway segments in Florida and to relieve peak-period non-recurring congestion through quick detection, verification and removal of freeway incidents. It consists of approximately 88 vehicles in its fleet and provides free service along about 918 centerline miles. The program is funded by the Florida Department of Transportation (FDOT) and its partners, and is bid out to private contractors. The objective of this study is to examine and evaluate the benefits of the Road Ranger service against its operating costs in five of the seven FDOT Districts and the Florida Turnpike Enterprise. The five Districts were chosen because of the availability of Road Ranger program data and activity logs for analysis. The Road Ranger program provides direct benefits to the general public in terms of reduced delay, fuel consumption and air pollution, and improved safety and security. The benefits are expected to be more significant during the peak period, when demand reaches or exceeds capacity, than in the off-peak and mid-day periods, when capacity may not be as significant an issue. The costs considered in this analysis include the costs of administration, operation, maintenance, employee salaries, and overhead. Incident data were obtained from the daily logs maintained by the Road Ranger service provider, containing important information about the time, duration, location, and type of service provided. Other data collected for this study include average daily traffic volume, geometric characteristics of the freeways, and the unit cost of Road Ranger service. The Freeway Service Patrol Evaluation (FSPE) model developed by the University of California, Berkeley was calibrated and used to estimate the benefit-cost ratio for the Road Ranger program. 
The estimated benefit-cost ratios based on delay and fuel savings indicate that the Road Ranger program produces significant benefits in all five Districts and the Turnpike. Benefit-cost ratios across districts range from 2.3:1 to 41.5:1, and the benefit-cost ratio of the entire Road Ranger program is estimated to be in excess of 25:1.
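The core arithmetic behind such a benefit-cost ratio is simple: monetised delay and fuel savings divided by programme cost. The sketch below shows the shape of the calculation only; all figures are made-up placeholders, not FDOT data or FSPE model output:

```python
# Hedged sketch of a benefit-cost calculation in the spirit of an FSP
# evaluation: savings are monetised and divided by annual programme cost.
# Every number in the example call is an invented placeholder.

def benefit_cost_ratio(delay_hours_saved, fuel_gallons_saved,
                       value_of_time, fuel_price, annual_cost):
    # Monetise delay savings ($/hour) and fuel savings ($/gallon),
    # then divide total benefits by the programme's operating cost.
    benefits = (delay_hours_saved * value_of_time
                + fuel_gallons_saved * fuel_price)
    return benefits / annual_cost

ratio = benefit_cost_ratio(500_000, 120_000, 15.0, 2.5, 1_200_000)
print(f"{ratio}:1")  # 6.5:1
```

The FSPE model additionally estimates the delay and fuel inputs themselves from traffic volumes, geometry, and incident logs; this sketch takes them as given.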
27

Emergency communications management : analysis and application

Sherbert, Nicole Elizabeth 24 November 2010 (has links)
Adopted in 2003, the National Incident Management System is the nation’s first standardized management system unifying the actions of all levels of governments during a large-scale emergency response. It sets the standard for interagency coordination and communication in the event of an emergency. This professional report seeks to produce a working, NIMS-compliant emergency communication plan for the City of Austin, Texas. The report begins with an explanation of NIMS, focusing on the national protocols for interagency communication and public information. It then presents a case study of emergency communications in practice, examining two firestorms in San Diego County, California that occurred four years apart – prior to and after the County’s implementation of NIMS communications protocols. The report synthesizes best practices in emergency communications – from both NIMS research and the San Diego case study – to create the City of Austin Public Information and Emergency Communication Plan, an operational guide that fully utilizes the tools and organizational structure of all City departments, including the City’s Communications and Public Information Office. / text
28

Configuration management data base in an information and communication technology environment / T.J. Medupe.

Medupe, Tsietsi Jacob January 2009 (has links)
Businesses face increasing requirements to run their operations successfully, in terms of both legal compliance and the optimisation of revenue streams, and they place high demands on Information and Communication Technology (ICT) to adapt to changing conditions. However, ICT organisations tasked with providing increased service levels at lower cost do not have the resources to reinvent themselves with every technological or regulatory change. Without frameworks in place to leverage automation and best practices, these ICT organisations are consumed with the day-to-day operations of ICT, with little time and few resources left to develop new services that add value to the business. There is, therefore, a definite requirement for a central repository system to enhance ICT service delivery and strategy: continually improving service, lowering per-service delivery costs, and enabling ICT organisations to introduce new services that support competitive advantage. The company chosen for the study is Sentech, which has recently adopted some of the Information Technology Infrastructure Library (ITIL) processes, namely service desk, incident management, and change management. The company is still deciding whether to implement the configuration management process, which would eventually lead to a Configuration Management Database (CMDB). This study attempted to indicate the role and importance of running the CMDB together with the other ITIL processes, and also indicated how those processes cannot function effectively without a proper CMDB platform. The primary objective of the study was to identify the importance and role of the CMDB in an ICT environment. The organisation implemented a number of processes, such as configuration and change management. 
To be successful with the ITIL change management process, people, processes, and technologies must work together in a coordinated manner to overcome the political roadblocks that usually inhibit cooperation between groups in the same organisation. The study indicated that the current ITIL processes, such as change management, are not achieving the required results due to the lack of a proper CMDB. General recommendations on the implementation of the CMDB based on the study were:
• Get executive and Board of Directors' support for the implementation of the CMDB.
• Redefine the role of the General Manager - ICT to the more appropriate role of Chief Information Officer, reporting directly to the board.
• Define detailed business processes and procedures.
• Set a clear scope for the CMDB.
• Identify the relevant stakeholders of the CMDB.
• Determine the full state of the current ICT processes.
• Formulate and document the business case for the CMDB.
• Set goals for what the CMDB will have to achieve.
• Create a plan for the implementation of the CMDB.
• Identify responsibilities for maintaining the CMDB.
• Create awareness within the organisation around the CMDB.
• Offer training on the CMDB to personnel.
• Baseline all ICT assets.
• Plan for ongoing management of the CMDB.
It is believed that the objective of the study has been met. The investigation made clear that there is a dire need for a central repository system such as the CMDB to support the other service delivery and support processes. If the recommendations are implemented within Sentech, the company will secure more effective and efficient service delivery on the ICT platform. Furthermore, Sentech can become an ICT leader and gain a competitive advantage over its competitors. 
/ Thesis (M.B.A.)--North-West University, Vaal Triangle Campus, 2010.
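At its core, the CMDB the study argues for is a store of configuration items (CIs) with typed relationships, so that incident and change records can be traced to the services they affect. A minimal sketch, with field names and data that are assumptions rather than ITIL or Sentech specifics:

```python
# Illustrative sketch of a minimal CMDB: configuration items with typed
# relationships, plus an impact query walking "supports" links to find
# the services affected by a change to one CI. All names are invented.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    ci_id: str
    ci_type: str              # e.g. "server", "application", "service"
    owner: str
    relationships: list = field(default_factory=list)  # (relation, ci_id)

def impacted_services(cmdb, ci_id, seen=None):
    # Recursively follow "supports" relationships from the given CI.
    seen = seen if seen is not None else set()
    for rel, target in cmdb[ci_id].relationships:
        if rel == "supports" and target not in seen:
            seen.add(target)
            impacted_services(cmdb, target, seen)
    return seen

cmdb = {
    "srv01": ConfigurationItem("srv01", "server", "ops",
                               [("supports", "app01")]),
    "app01": ConfigurationItem("app01", "application", "dev",
                               [("supports", "svc-billing")]),
    "svc-billing": ConfigurationItem("svc-billing", "service", "finance"),
}
print(sorted(impacted_services(cmdb, "srv01")))  # ['app01', 'svc-billing']
```

This traceability is precisely what the study found missing: without it, a change record cannot tell the change manager which services it puts at risk.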
30

Gerência pró-ativa de incidentes de segurança através da quantificação de dados e da utilização de métodos estatísticos multivariados / Proactive management of security incidents through data quantification and the use of multivariate statistical methods

Amaral, Érico Marcelo Hoff do 08 March 2010 (has links)
It is recognised that, in today's organisations, information has become an asset of paramount importance. Analysing this trend, one realises that the evolution and dynamism of the threats and security incidents against this asset are equally indisputable. Moreover, it is essential that those responsible for organisational management strive to monitor incidents related to the Information Technology (IT) area, acting on these problems in a timely, proactive and intelligent way, enabling accurate and rapid decisions aimed at ensuring business continuity. This dissertation presents a tool for managing incidents related to IT services and systems, called SDvPC (Service Desk via Portal Corporativo), which integrates a Service Desk system with a corporate portal and provides, in a centralised way, formal procedures for reporting and escalating the problems identified by users in an organisation. The tool helps to ensure that the organisation's weaknesses are reported quickly and simply, as soon as possible, and allows proactive management of IT incidents by exploiting the quantification of the qualitative data collected at the Service Desk and the grouping of incidents through multivariate analysis. As a result, SDvPC provides tacit knowledge of the failures, shortcomings and difficulties attached to IT services and systems in an organisational environment, supporting vision and strategic planning for the activities of the support area.
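The quantification-and-grouping idea above can be sketched simply: qualitative Service Desk fields are one-hot encoded into numeric vectors and the resulting points are grouped, here with a small k-means. The categories, fields, and data are invented for illustration and the dissertation's actual multivariate method is not reproduced:

```python
# Sketch: encode qualitative incident attributes as numeric vectors,
# then group incidents with a basic k-means. Illustrative data only.
import random

def one_hot(value, categories):
    return [1.0 if value == c else 0.0 for c in categories]

def encode(incident, cat_maps):
    # cat_maps: list of (field_name, known categories for that field).
    vec = []
    for field_name, cats in cat_maps:
        vec.extend(one_hot(incident[field_name], cats))
    return vec

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # Recompute centroids; keep the old one if a cluster emptied.
        centroids = [
            [sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters
```

Grouping recurring incidents this way is what turns individual Service Desk tickets into the aggregate view the abstract describes, revealing which services generate systematic failures.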
