391 |
The Effectiveness of International Law in Upholding the Rights of Refugees
Tran, Wendalyn, 01 January 2014 (has links)
The 1951 Convention and the 1967 Protocol Relating to the Status of Refugees were established after World War II and are the primary documents that dictate international refugee policy. They were intended to protect the basic human rights of refugees; ensure them safe asylum; protect against refoulement; and provide refugees with basic services and assistance such as food, legal documents, and primary education. Despite the creation of these protective instruments, human rights abuses against refugees continue to be reported as the global refugee crisis worsens, calling into question the effectiveness of the 1951 Convention and 1967 Protocol. In this thesis, Jordan, Tanzania, and Thailand serve as case studies for exploring the effectiveness of the current international refugee regime. Both legislation and narratives are analyzed in order to fully comprehend the context of each situation.
|
392 |
Establishing the protocol validity of an electronic standardised measuring instrument / Sebastiaan Rothmann
Rothmann, Sebastiaan, January 2009 (has links)
Over the past few decades, the nature of work has undergone remarkable changes, resulting in a shift from manual demands to mental and emotional demands on employees. To manage these demands and optimise employee performance, organisations use well-being surveys to guide their interventions. Because these interventions have significant financial implications, it is important to ensure the validity and reliability of the results. However, even when a validated measuring instrument is used, the problem remains that a wellness audit may be reliable, valid and equivalent when the results of a group of people are analysed, yet this cannot be guaranteed for each individual. It is therefore important to determine the validity and reliability of individual measurements (i.e. protocol validity). However, little information exists concerning the efficiency of different methods of evaluating protocol validity.
The general objective of this study was to establish an efficient, real-time method/indicator for determining protocol validity in web-based instruments. The study sample consisted of 14 592 participants from several industries in South Africa and was extracted from a work-related well-being survey archive. A protocol validity indicator that detects random responses was developed and evaluated. It was also investigated whether Item Response Theory (IRT) fit statistics have the potential to serve as protocol validity indicators; this approach was compared with the newly developed indicator.
The developed protocol validity indicator makes use of neural networks to predict whether cases have protocol validity. A neural network was trained on a large non-random sample and a computer-generated random sample. The neural network was then cross-validated to test whether subsequent cases could be accurately classified as belonging to the random or the non-random sample. The neural network correctly detected 86.39% of the random responses and 85.85% of the non-random responses. Analyses of the misclassified cases demonstrated that the neural network was accurate: cases classified as non-random were in fact valid and reliable, while cases classified as random showed a problematic factor structure and low internal consistency. Neural networks thus proved to be an effective technique for detecting potentially invalid and unreliable cases in electronic well-being surveys.
Subsequently, the protocol validity detection capability of IRT fit statistics was investigated. The fit statistics were calculated for the study population and for randomly generated data with a uniform distribution. In both the study population and the random data, cases with higher outfit statistics showed problems with validity and reliability. When compared with the neural network technique, the fit statistics suggested that the neural network was more effective in classifying non-random cases than in classifying random cases. Overall, the fit statistics proved to be effective indicators of protocol invalidity (rather than validity), provided that some additional measures are imposed.
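The outfit statistic referred to above has a standard closed form for dichotomous Rasch-type items: the mean of a person's squared standardized residuals. A minimal sketch of that computation (function name and example values are illustrative, not taken from the thesis):

```python
def outfit_msq(responses, probs):
    """Person outfit mean-square for dichotomous items: the mean of
    squared standardized residuals z^2 = (x - P)^2 / (P * (1 - P)),
    where x is the observed response and P the model probability.
    Values well above 1 flag erratic, possibly random, protocols."""
    z_squared = [(x - p) ** 2 / (p * (1 - p))
                 for x, p in zip(responses, probs)]
    return sum(z_squared) / len(z_squared)

# A response pattern that matches the model probabilities fits well;
# an inverted (erratic) pattern produces a large outfit value.
good = outfit_msq([1, 1, 0, 0], [0.9, 0.9, 0.1, 0.1])   # ~0.11
bad  = outfit_msq([0, 0, 1, 1], [0.9, 0.9, 0.1, 0.1])   # ~9.0
```

Flagging every case whose outfit mean-square exceeds a cut-off (e.g. 2.0) is one simple way such a statistic can serve as a protocol invalidity indicator.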
Recommendations were made for the organisation as well as for future research. / Thesis (M.Sc. (Human Resource Management))--North-West University, Potchefstroom Campus, 2010.
|
393 |
Canada and the Palermo Protocol of 2000 on Human Trafficking: A Qualitative Case Study.
Holden, Christie, 07 May 2013 (has links)
This study consists of a qualitative analysis on the subject of human trafficking in Canada. It explores the steps that have been taken to address the Protocol to Prevent, Suppress and Punish Trafficking in Persons, Especially Women and Children, supplementing the Convention Against Transnational Organized Crime (2000c), also known as the Palermo Protocol, and examines Canada’s commitment to changing the international and domestic context in which human trafficking takes place. Through an exploration of Canadian legislation, literature and prosecutions presented in Canadian courts between January 2005 and December 2011, this research aims to establish whether Canada has shown a commitment to ending and preventing human trafficking that is consistent with the Recommended Guidelines published by the Office of the United Nations High Commissioner for Human Rights (2002). A nominal coding scheme was used to show, in basic terms, the level of commitment Canada is showing toward combating human trafficking, both internationally and domestically. Results indicate that while Canada met minimum standards by implementing anti-trafficking legislation in 2005 that is consistent with the Palermo Protocol, the country is falling short of its commitments due to inadequate victim protection measures, a lack of standardized data collection procedures, and insufficient efforts to address the root causes of trafficking.
|
395 |
Energy-Efficient Battery-Aware MAC protocol for Wireless Sensor Networks
Nasrallah, Yamen, 19 March 2012 (has links)
Wireless sensor networks suffer from limited power resources. Managing these energy constraints and exploring new ways to minimize power consumption during node operation are therefore critical issues. Conventional MAC protocols deal with this problem without considering the internal properties of the sensor nodes’ batteries. However, recent studies of battery modelling and behaviour have shown that the pulsed discharge mechanism and the charge recovery effect may have a significant impact on wireless communication in terms of power saving. In this thesis we propose two battery-aware MAC protocols that take advantage of these factors to save energy and prolong the lifetime of the nodes and the network without affecting throughput. In both protocols we measure the remaining battery capacity of a node and use that measurement in the back-off scheme. The first protocol gives nodes with higher remaining battery capacity greater priority to access the medium, while the second gives greater medium-access priority to nodes with lower remaining battery capacity. The objective is to investigate, through simulations, which protocol better reduces the power consumption of the nodes and improves the lifetime of the network, and to compare the results with the CSMA-CA protocol.
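The two battery-aware back-off variants can be sketched as a contention-window adjustment driven by remaining capacity. This is an illustrative reconstruction, not the thesis's exact scheme; the window sizes and the linear scaling are assumptions:

```python
import random

def battery_aware_slot(remaining, capacity, favour_high,
                       cw_min=16, cw_max=1024):
    """Pick a random back-off slot from a contention window whose size
    is scaled by the node's remaining battery capacity.

    favour_high=True  -> more remaining capacity gives a smaller window,
                         i.e. higher medium-access priority (protocol 1).
    favour_high=False -> less remaining capacity gives a smaller window
                         (protocol 2).
    """
    fraction = remaining / capacity            # 0.0 (empty) .. 1.0 (full)
    if favour_high:
        cw = cw_min + (1.0 - fraction) * (cw_max - cw_min)
    else:
        cw = cw_min + fraction * (cw_max - cw_min)
    return random.randint(0, int(cw))
```

Under protocol 1, a fully charged node draws its slot from the minimal window [0, 16] and thus tends to seize the medium first; under protocol 2 the nearly depleted node gets that advantage instead.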
|
396 |
SNGF Selected Node Geographic Forwarding Routing Protocol for VANETs
Vaqar, Sayyid, January 2010 (has links)
This thesis presents a protocol for intervehicle communication for use in Vehicular
Ad Hoc Networks (VANET). VANET is a natural extension of mobile ad
hoc networks (MANET) in which the restrictions related to power and mobility
are relaxed. The routing protocols used for MANETs are generally dependent on
the state of the network. With changes in the network topology, routing messages
are generated so that the states of the routers in the network are updated. In
the case of VANETs, in which the level of node mobility is high, message-routing
overhead has serious implications for the scalability and throughput of the routing
protocol.
This thesis introduces criteria that are recommended for use when protocols
are designed for VANET applications and presents the Selected Node Geographic
Forwarding (SNGF) protocol. The SNGF protocol implements controlled flooding
in an efficient manner in order to reduce unnecessary communication overhead.
The protocol has a destination discovery mechanism that allows it to initiate
correspondence between nodes without reliance on static location services. The
protocol avoids formation of clusters by using the concept of selective forwarding, thus providing the advantages of cluster-based approaches without actually forming clusters. It deals effectively with blind flooding by introducing a retransmission time delay in the nodes. This delay favours the nodes in the direction of the destination and prevents other nodes from retransmitting the same message. The SNGF protocol does not use routing tables, which require frequent updates in mobile networks; instead, it directs messages to geographic locations, and any available intermediary nodes forward them. The protocol also provides techniques for handling network fragmentation, a frequent problem in vehicular networks, and is capable of delayed message transmission and multiple-route discovery when the shortest path to the destination is unavailable.
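The direction-favouring retransmission delay can be sketched as follows. The linear delay mapping, radio range, and timer values are illustrative assumptions, not parameters from the thesis:

```python
import math

def retransmission_delay(node, sender, dest,
                         radio_range=250.0, max_delay=0.05):
    """Delay (seconds) a candidate forwarder waits before rebroadcasting.
    Nodes that make more geographic progress toward the destination wait
    less, so they rebroadcast first; the remaining candidates overhear
    that rebroadcast and suppress their own copy of the message."""
    progress = math.dist(sender, dest) - math.dist(node, dest)
    if progress <= 0:
        return max_delay                  # no progress: lowest priority
    # Normalise progress by the radio range and clamp to [0, 1].
    return max_delay * (1.0 - min(progress / radio_range, 1.0))
```

With positions as (x, y) coordinates, a node 200 m closer to the destination than the sender fires its timer well before one only 50 m closer, so only the best-placed node normally retransmits.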
To evaluate the performance of the SNGF protocol, an extensive study of
mobile networks was conducted using the NS2 simulator. The simulation results
demonstrate the reachability of the protocol, its scalability advantages and its
total independence from location services.
The SNGF protocol allows each participating node to operate independently of other nodes in the network. Nodes are able to communicate with other nodes without ever becoming dependent on intermediary nodes. This feature opens new possibilities for individual-node-based application development in ad hoc networks. Traffic profiling is described as it would be observed by an independent node participating in a VANET using the SNGF protocol. The node communicates with other nodes and collects relevant data through the discourse capability of SNGF. The data collected by the node is viewed as a snapshot in time of the traffic conditions down the road, based upon which future traffic conditions are predicted. Traffic profiling is investigated for different levels of VANET deployment. The simulation results show that the proposed method of traffic profiling in a VANET environment using the SNGF protocol is viable even at lower levels of deployment.
|
397 |
Effectiveness of Implementation of Gastric and Duodenal Ulcer Clinical Protocol in the Kyrgyz Republic
Shimarova, Memerian; Nishimura, Akio; Ito, Katsuki; Hamajima, Nobuyuki, 01 1900 (has links)
No description available.
|
398 |
Synthesis of orchestrators from service choreographies
McIlvenna, Stephen, January 2009 (has links)
With service interaction modelling, it is customary to distinguish between two types of models: choreographies and orchestrations. A choreography describes interactions within a collection of services from a global perspective, where no service plays a privileged role. Instead, services interact in a peer-to-peer manner. In contrast, an orchestration describes the interactions between one particular service, the orchestrator, and a number of partner services. The main proposition of this work is an approach to bridge these two modelling viewpoints by synthesising orchestrators from choreographies. To start with, choreographies are defined using a simple behaviour description language based on communicating finite state machines. From such a model, orchestrators are initially synthesised in the form of state machines. It turns out that state machines are not suitable for orchestration modelling, because orchestrators generally need to engage in concurrent interactions. To address this issue, a technique is proposed to transform state machines into process models in the Business Process Modelling Notation (BPMN). Orchestrations represented in BPMN can then be augmented with additional business logic to achieve value-adding mediation. In addition, techniques exist for refining BPMN models into executable process definitions. The transformation from state machines to BPMN relies on Petri nets as an intermediary representation and leverages techniques from theory of regions to identify concurrency in the initial Petri net. Once concurrency has been identified, the resulting Petri net is transformed into a BPMN model. The original contributions of this work are: an algorithm to synthesise orchestrators from choreographies and a rules-based transformation from Petri nets into BPMN.
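As a toy illustration of the synthesis direction only (not the thesis's Petri-net and theory-of-regions algorithm, which also recovers concurrency), a global choreography given as a communicating finite state machine can be projected onto one role, marking the messages that role sends (!) and receives (?); the role names and messages below are invented:

```python
# Choreography transitions: (state, sender, receiver, message, next_state)
choreography = [
    ("s0", "Customer", "Store",    "order",   "s1"),
    ("s1", "Store",    "Shipper",  "ship",    "s2"),
    ("s2", "Shipper",  "Customer", "deliver", "s3"),
]

def project_orchestrator(transitions, role):
    """Project a global choreography onto one role: interactions the role
    sends become '!msg', interactions it receives become '?msg', and
    interactions not involving the role are dropped (a simplification --
    in general such unobserved interactions need further treatment)."""
    fsm = []
    for src, sender, receiver, msg, dst in transitions:
        if sender == role:
            fsm.append((src, "!" + msg, dst))
        elif receiver == role:
            fsm.append((src, "?" + msg, dst))
    return fsm
```

Projecting onto "Store" yields the state machine [("s0", "?order", "s1"), ("s1", "!ship", "s2")], which is the kind of orchestrator-side behaviour that would then be translated, via a Petri net, into BPMN.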
|
399 |
Formal specification of the TCP service and verification of TCP connection management / Han, Bing. Unknown Date (has links)
Using Coloured Petri nets (CPNs) and automata theory, this thesis shows how to formalise the service provided by the Transmission Control Protocol (TCP) and how to verify TCP Connection Management, an essential part of TCP. Most previous work on modelling and analysing TCP Connection Management is based on early versions of TCP, which differ from the current TCP specification; moreover, its scope is mainly confined to the connection establishment procedure, while the release procedure is either simplified or omitted from investigation. This thesis extends prior work by verifying a detailed model of TCP Connection Management. The TCP service is defined by specifying the set of service primitives and their sequencing constraints at each service access point. / Thesis (PhDComputerSystemsEng)--University of South Australia, 2004.
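The behaviour under verification is TCP's well-known connection-management state machine (RFC 793). A fragment covering both the three-way handshake and the release procedure can be encoded as a simple transition table; the event labels are a shorthand, not the thesis's CPN notation:

```python
# Fragment of the TCP connection-management state machine (RFC 793),
# covering three-way-handshake establishment and a release sequence --
# the part earlier analyses tended to simplify or omit.
TCP_FSM = {
    ("CLOSED",      "active_open/snd_SYN"):  "SYN_SENT",
    ("CLOSED",      "passive_open"):         "LISTEN",
    ("LISTEN",      "rcv_SYN/snd_SYN_ACK"):  "SYN_RCVD",
    ("SYN_SENT",    "rcv_SYN_ACK/snd_ACK"):  "ESTABLISHED",
    ("SYN_RCVD",    "rcv_ACK"):              "ESTABLISHED",
    ("ESTABLISHED", "close/snd_FIN"):        "FIN_WAIT_1",
    ("FIN_WAIT_1",  "rcv_ACK"):              "FIN_WAIT_2",
    ("FIN_WAIT_2",  "rcv_FIN/snd_ACK"):      "TIME_WAIT",
    ("ESTABLISHED", "rcv_FIN/snd_ACK"):      "CLOSE_WAIT",
    ("CLOSE_WAIT",  "close/snd_FIN"):        "LAST_ACK",
    ("LAST_ACK",    "rcv_ACK"):              "CLOSED",
}

def step(state, event):
    """Advance one transition; undefined events leave the state unchanged."""
    return TCP_FSM.get((state, event), state)
```

Verification in the CPN setting amounts to checking that every reachable interleaving of such transitions (plus retransmissions and message loss, which this fragment omits) conforms to the specified service-primitive sequences.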
|
400 |
A framework for managing the evolving web service protocols in service-oriented architectures.
Ryu, Seung Hwan, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2007 (has links)
In Service-Oriented Architectures, everything is a service, and services can interact with each other when needed. Web services (or simply services) are loosely coupled software components that are published, discovered, and invoked across the Web. As the use of Web services grows, it is important to understand the business protocols that provide clients with the information on how to interact with services correctly. In dynamic Web services environments, service providers need to constantly adapt their business protocols to reflect the restrictions and requirements introduced by new applications, new business strategies, and new laws, or to fix problems found in the protocol definition. However, the effective management of such protocol evolution raises critical problems, one of the most critical being how to handle instances running under the old protocol when their protocol has been changed. Simple solutions, such as aborting them or allowing them to continue to run according to the old protocol, can be considered, but they are inapplicable for many reasons (e.g., the loss of work already done and the critical nature of the work). We present a framework that supports service administrators in managing business protocol evolution through several features: a set of change operators allowing modifications of protocols; a variety of protocol change impact analyses that automatically determine which ongoing instances can be migrated to the new version of the protocol; and data mining techniques that induce a model for classifying ongoing instances as migratable to the new protocol. To support the protocol evolution process, we have also developed database-backed GUI tools on top of our existing system. The proposed approach and tools can help service administrators manage the evolution of ongoing instances when the business protocols of the services with which they interact have changed.
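One of the simpler impact analyses such a framework could apply is a replayability check: an ongoing instance can be migrated if the message trace it has already executed is still a valid path in the new protocol version. The protocol encoding and message names below are illustrative assumptions, not the thesis's change operators:

```python
def can_migrate(trace, new_protocol, initial_state="start"):
    """Return True if the instance's executed message trace replays as a
    valid path in the new protocol, i.e. the state the instance has
    reached was reached in a way the new protocol version permits."""
    state = initial_state
    for message in trace:
        nxt = new_protocol.get((state, message))
        if nxt is None:
            return False      # history is invalid under the new protocol
        state = nxt
    return True

# A new protocol version that now requires login before ordering:
new_version = {
    ("start",     "login"): "logged_in",
    ("logged_in", "order"): "ordered",
    ("ordered",   "pay"):   "paid",
}
```

An instance that already logged in and ordered replays cleanly and can be migrated; one that ordered without logging in cannot, and would need one of the other strategies (abort, continue on the old protocol, or a classifier-guided adjustment).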
|