831

An empirical exploration of virtual community participation: the interpersonal relationship perspective. / CUHK electronic theses & dissertations collection / ProQuest dissertations and theses

January 2006 (has links)
These results have implications for VC organizers as well as VC researchers. For researchers, the interpersonal relationship perspective of VC participation not only offers a comprehensive theoretical framework but also opens a new perspective for future research. / This dissertation contributes to virtual community research by proposing and empirically validating an exploratory theoretical framework from the interpersonal relationship perspective, using two interpersonal behavior theories---the Triandis interpersonal behavior model and FIRO (Fundamental Interpersonal Relationship Orientation)---to explain two types of VC participation---BOI (Behavior to Obtain Information) and BGI (Behavior to Give Information). Data were collected in three representative Chinese VCs. Data analysis showed that the two interpersonal relationship theories are effective in explaining VC participation. Specifically, 53% of the variance of BOI and 41% of the variance of BGI are explained by the Triandis model. VC participation habit is found to have the largest positive effect on BOI and BGI. BOI also has a positive effect on BGI. The conclusion from the FIRO theory is that the three dimensions of FIRO---inclusion, control, and affection---constructed in two directions, wanted and expressed, significantly influence VC participation. Wanted and expressed inclusion have positive effects on both BOI and BGI; expressed control has a positive effect on BGI, and wanted control has a positive effect on both BOI and BGI; expressed affection has a positive effect on BGI, and wanted affection has a positive effect on both BOI and BGI. / Virtual communities (VCs) have emerged as one of the most popular Internet services during the last decade and have been effective tools in knowledge management, customer relationship management, and other business-related functions. The growth of VCs is crucial to VC operation and depends mainly on the members and their participation.
Only after the aggregation of a critical mass of members can VCs accumulate invaluable information and diversity to generate revenue for the VC organizers. Thus, an understanding of VC participation is important to VC organizers. Although VC participation has been explored from diverse perspectives, few studies offer a comprehensive theoretical framework to explain why people participate in VCs. / Li Honglei. / "September 2006." / Adviser: Siu King Vincent Lai. / Source: Dissertation Abstracts International, Volume: 68-08, Section: A, page: 3482. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 155-169). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest dissertations and theses, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
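As a rough, purely illustrative companion to the "variance explained" figures above (53% for BOI, 41% for BGI): the sketch below fits an ordinary least-squares line to entirely synthetic data, where a hypothetical "habit" predictor drives a BOI-like score, and reports R². None of the numbers or variable names come from the dissertation.

```python
import random
import statistics

# Synthetic, hypothetical data: a "habit" predictor and a BOI-like score.
random.seed(3)
habit = [random.gauss(0, 1) for _ in range(500)]
boi = [0.7 * h + random.gauss(0, 0.7) for h in habit]

# Ordinary least squares by hand: slope = cov(x, y) / var(x).
mx, my = statistics.mean(habit), statistics.mean(boi)
sxy = sum((x - mx) * (y - my) for x, y in zip(habit, boi))
sxx = sum((x - mx) ** 2 for x in habit)
slope = sxy / sxx
intercept = my - slope * mx

# R^2 is the share of outcome variance the predictor explains.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(habit, boi))
ss_tot = sum((y - my) ** 2 for y in boi)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")   # around 0.5, i.e. roughly half the variance explained
```

With these synthetic coefficients the predictor explains about half the variance, which is the same order as the 53% reported for BOI under the full Triandis model.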
832

On tracing attackers of distributed denial-of-service attack through distributed approaches. / CUHK electronic theses & dissertations collection

January 2007 (has links)
For the macroscopic traceback problem, we propose an algorithm, which leverages the well-known Chandy-Lamport distributed snapshot algorithm, so that a set of border routers of the ISPs can correctly gather statistics in a coordinated fashion. The victim site can then deduce the local traffic intensities of all the participating routers. Given the collected statistics, we provide a method for the victim site to locate the attackers who sent out dominating flows of packets. Our finding shows that the proposed methodology can pinpoint the location of the attackers in a short period of time. / In the second part of the thesis, we study a well-known technique against the microscopic traceback problem. The probabilistic packet marking (PPM for short) algorithm by Savage et al. has attracted the most attention for contributing the idea of IP traceback. The most interesting point of this IP traceback approach is that it allows routers to encode certain information on the attack packets based on a pre-determined probability. Upon receiving a sufficient number of marked packets, the victim (or a data collection node) can construct the set of paths the attack packets traversed (the attack graph), and hence the victim can obtain the locations of the attackers. In this thesis, we present a discrete-time Markov chain model that calculates the precise number of marked packets required to construct the attack graph. / The denial-of-service attack has been a pressing problem in recent years. Denial-of-service defense research has blossomed into one of the main streams of network security. Various techniques, such as the pushback message, the ICMP traceback, and packet filtering, are remarkable results from this active field of research.
/ The focus of this thesis is to study and devise efficient and practical algorithms to tackle flood-based distributed denial-of-service attacks (flood-based DDoS attacks for short), and we aim to trace the location of every attacker. In this thesis, we propose a revolutionary divide-and-conquer traceback methodology. Tracing back the attackers on a global scale is always a difficult and tedious task. Alternatively, we suggest that one should first identify the Internet service providers (ISPs) that contribute to the flood-based DDoS attack by using a macroscopic traceback approach. After the concerned ISPs have been found, one can narrow the traceback problem down, and the attackers can then be located by using a microscopic traceback approach. / Though the PPM algorithm is a desirable algorithm for tackling the microscopic traceback problem, it is not perfect, as its termination condition is not well defined in the literature. More importantly, without a proper termination condition, the traceback results could be wrong. In this thesis, we provide a precise termination condition for the PPM algorithm. Based on this precise termination condition, we devise a new algorithm named the rectified probabilistic packet marking algorithm (RPPM algorithm for short). The most significant merit of the RPPM algorithm is that when the algorithm terminates, it guarantees that the constructed attack graph is correct with a specified level of confidence. Our finding shows that the RPPM algorithm can guarantee the correctness of the constructed attack graph under different router marking probabilities and different network graph structures. The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means of enhancing the reliability of the PPM algorithm. / Wong Tsz Yeung. / "September 2007." / Adviser: Man Hon Wong.
/ Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4867. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 176-185). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
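As a toy illustration of the marking idea behind PPM (a generic node-sampling sketch, not this thesis's RPPM algorithm or its Markov chain model), the simulation below counts how many packets a victim must observe before it has seen a mark from every router on the attack path — the quantity whose precise value the thesis computes analytically. The marking probability and path length are invented for the example.

```python
import random

def send_packet(path, p):
    """Node-sampling sketch: each router on the attack path (listed
    attacker-side first) overwrites the packet's mark with
    probability p, so marks from distant routers are rarer."""
    mark = None
    for router in path:
        if random.random() < p:
            mark = router
    return mark

def packets_until_full_path(path, p):
    """Count packets the victim observes before it has collected a
    mark from every router on the path (i.e. can rebuild the path)."""
    seen, count = set(), 0
    while len(seen) < len(path):
        count += 1
        mark = send_packet(path, p)
        if mark is not None:
            seen.add(mark)
    return count

random.seed(1)
path = [f"R{i}" for i in range(1, 11)]          # a 10-hop attack path
trials = [packets_until_full_path(path, 0.04) for _ in range(200)]
avg = sum(trials) / len(trials)
print(f"average packets needed for a 10-hop path: {avg:.0f}")
```

The average depends heavily on the marking probability: marks from the farthest router survive only if no downstream router overwrites them, which is why a well-defined termination condition matters.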
833

Fair routing for resilient packet rings.

January 2003 (has links)
Li Cheng. / Thesis submitted in: November 2002. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 57-61). / Abstracts in English and Chinese. / Chapter CHAPTER 1 --- INTRODUCTION --- p.1 / Chapter 1.1 --- The Evolution of Ring Network Technologies --- p.1 / Chapter 1.1.1 --- Token Ring Technology --- p.1 / Chapter 1.1.2 --- Resilient Packet Ring Technology --- p.4 / Chapter 1.2 --- Optimal Routing --- p.7 / Chapter 1.3 --- Fairness --- p.8 / Chapter 1.4 --- Outline of Thesis --- p.10 / Chapter CHAPTER 2 --- OPTIMAL ROUTING --- p.11 / Chapter 2.1 --- Throughput Analysis --- p.11 / Chapter 2.2 --- Numerical Results --- p.13 / Chapter CHAPTER 3 --- OPTIMAL FAIR ROUTING --- p.19 / Chapter 3.1 --- Overview --- p.19 / Chapter 3.2 --- Max-min Fair Allocation --- p.19 / Chapter 3.3 --- Proportionally Fair Allocation --- p.32 / Chapter 3.4 --- Numerical Results --- p.33 / Chapter CHAPTER 4 --- TRADEOFF ANALYSIS --- p.40 / Chapter 4.1 --- Tradeoff between Throughput and Max-min Fairness --- p.40 / Chapter 4.2 --- Numerical Results --- p.42 / Chapter 4.3 --- Tradeoff between Throughput and Utility --- p.47 / Chapter 4.4 --- Numerical Results --- p.48 / Chapter CHAPTER 5 --- CONCLUSION --- p.54 / Chapter 5.1 --- Summary --- p.54 / Chapter 5.2 --- Discussion and Future Work --- p.55 / BIBLIOGRAPHY --- p.57
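Chapter 3 of this thesis treats max-min fair allocation. The classic idea behind max-min fairness can be sketched with progressive filling on a single bottleneck link (a generic illustration with invented demands, not the thesis's ring formulation):

```python
def max_min_fair(demands, capacity):
    """Progressive filling on one bottleneck: satisfy the smallest
    demand first, then split what remains equally among the rest, so
    no flow can gain without taking from a smaller allocation."""
    alloc, remaining = {}, float(capacity)
    pending = sorted(demands.items(), key=lambda kv: kv[1])
    for i, (flow, demand) in enumerate(pending):
        equal_share = remaining / (len(pending) - i)
        alloc[flow] = min(demand, equal_share)
        remaining -= alloc[flow]
    return alloc

alloc = max_min_fair({"a": 2.0, "b": 8.0, "c": 10.0}, capacity=18.0)
print(alloc)   # a keeps its 2; b and c split the remaining 16: 8 each
```

Flow `a` is fully satisfied below the fair share, and the leftover capacity is divided equally between the larger flows, which is exactly the max-min property.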
834

On decode-and-forward cooperative systems with errors in relays.

January 2009 (has links)
Mi, Wengang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 80-85). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Path loss and fading channel --- p.2 / Chapter 1.2 --- Relay Channel --- p.4 / Chapter 1.3 --- Power allocation --- p.6 / Chapter 1.4 --- Network coding --- p.8 / Chapter 1.5 --- Outline of the thesis --- p.8 / Chapter 2 --- Background Study --- p.10 / Chapter 2.1 --- Cooperative communication --- p.10 / Chapter 2.1.1 --- User cooperation diversity --- p.11 / Chapter 2.1.2 --- Cooperative diversity --- p.14 / Chapter 2.1.3 --- Coded cooperation --- p.18 / Chapter 2.2 --- Power control and resource allocation in cooperative communication --- p.19 / Chapter 2.3 --- Network coding --- p.21 / Chapter 3 --- Power allocation in DF system --- p.24 / Chapter 3.1 --- Introduction --- p.24 / Chapter 3.2 --- System Model --- p.25 / Chapter 3.3 --- BER analysis with power allocation --- p.27 / Chapter 3.3.1 --- BER analysis of single relay system --- p.27 / Chapter 3.3.2 --- Generalization for N-relay cooperation system --- p.30 / Chapter 3.4 --- Approximation --- p.31 / Chapter 3.5 --- Conclusion --- p.37 / Chapter 4 --- Network coding cooperation --- p.38 / Chapter 4.1 --- Introduction --- p.38 / Chapter 4.2 --- System model --- p.39 / Chapter 4.3 --- Performance analysis --- p.44 / Chapter 4.3.1 --- Network coding cooperation --- p.47 / Chapter 4.3.2 --- Conventional repetition cooperation --- p.48 / Chapter 4.3.3 --- Simulation result --- p.49 / Chapter 4.4 --- More nodes with network coding --- p.52 / Chapter 4.4.1 --- System model: to be selfish or not --- p.53 / Chapter 4.4.2 --- Performance analysis --- p.56 / Chapter 4.4.3 --- Simulation result --- p.62 / Chapter 4.5 --- Further discussion --- p.63 / Chapter 5 --- Conclusion --- p.64 / Chapter A --- Equation Derivation --- p.66 / Chapter A.1 --- Proof of proposition 1 --- p.66 / Chapter A.2 --- Generalized solution --- p.68 / Chapter A.3 --- System outage probability of generous scheme --- p.69 / Chapter A.4 --- System outage probability of selfish scheme --- p.74 / Bibliography --- p.79
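This entry's table of contents covers BER and outage analysis for decode-and-forward (DF) relays with possible decoding errors (Chapters 3 and A). As a rough Monte Carlo companion — a generic single-relay DF model with invented parameters, not the thesis's analysis — one can estimate how a relay that forwards only when it decodes affects the outage probability:

```python
import random

def rayleigh_gain():
    """Channel power gain under Rayleigh fading (exponential, mean 1)."""
    return random.expovariate(1.0)

def df_outage(snr_db, rate, trials=20000):
    """Estimate the outage probability of a single decode-and-forward
    relay with direct-link combining: the relay forwards only when it
    decodes the source, and the half-duplex two-slot protocol doubles
    the required spectral efficiency (hence 2 * rate)."""
    snr = 10 ** (snr_db / 10)
    threshold = 2 ** (2 * rate) - 1
    outages = 0
    for _ in range(trials):
        g_sd, g_sr, g_rd = rayleigh_gain(), rayleigh_gain(), rayleigh_gain()
        relay_decodes = snr * g_sr >= threshold
        combined = snr * g_sd + (snr * g_rd if relay_decodes else 0.0)
        outages += combined < threshold
    return outages / trials

random.seed(7)
p10, p20 = df_outage(10, 0.5), df_outage(20, 0.5)
print(p10, p20)   # outage falls quickly as SNR grows
```

The silent relay in the low-SNR source-relay case is exactly what makes errors at the relay the crux of DF analysis.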
835

Server's anonymity attack and protection of P2P-Vod systems. / Server's anonymity attack and protection of peer-to-peer video on demand systems

January 2010 (has links)
Lu, Mengwei. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (p. 52-54). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Introduction of P2P-VoD Systems --- p.5 / Chapter 2.1 --- Major Components of the System --- p.5 / Chapter 2.2 --- Peer Join and Content Discovery --- p.6 / Chapter 2.3 --- Segment Sizes and Replication Strategy --- p.7 / Chapter 2.4 --- Piece Selection --- p.8 / Chapter 2.5 --- Transmission Strategy --- p.9 / Chapter 3 --- Detection Methodology --- p.10 / Chapter 3.1 --- Capturing Technique --- p.11 / Chapter 3.2 --- Analytical Framework --- p.15 / Chapter 3.3 --- Results of our Detection Methodology --- p.24 / Chapter 4 --- Protective Architecture --- p.25 / Chapter 4.1 --- Architecture Overview --- p.25 / Chapter 4.2 --- Content Servers --- p.27 / Chapter 4.3 --- Shield Nodes --- p.28 / Chapter 4.4 --- Tracker --- p.29 / Chapter 4.5 --- A Randomized Assignment Algorithm --- p.30 / Chapter 4.6 --- Seeding Algorithm --- p.31 / Chapter 4.7 --- Connection Management Algorithm --- p.33 / Chapter 4.8 --- Advantages of the Shield Nodes Architecture --- p.33 / Chapter 4.9 --- Markov Model for Shield Nodes Architecture Against Single Track Anonymity Attack --- p.35 / Chapter 5 --- Experiment Result --- p.40 / Chapter 5.1 --- Shield Node architecture against anonymity attack --- p.40 / Chapter 5.1.1 --- Performance Analysis for Single Track Anonymity Attack --- p.41 / Chapter 5.1.2 --- Experiment Result on PlanetLab for Single Track Anonymity Attack --- p.42 / Chapter 5.1.3 --- Parallel Anonymity Attack --- p.44 / Chapter 5.2 --- Shield Nodes architecture against DoS attack --- p.45 / Chapter 6 --- Related Work --- p.48 / Chapter 7 --- Future Work --- p.49 / Chapter 8 --- Conclusion --- p.50
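This thesis's protective architecture (Chapter 4) interposes "shield nodes" between peers and the content servers and assigns them via a randomized algorithm (Chapter 4.5). A minimal, hypothetical sketch of that tracker-side idea — the function names and the choice of k are invented, not the thesis's exact algorithm:

```python
import random

def assign_shields(peers, shield_nodes, k=2):
    """Tracker-side sketch: hand each joining peer k shield nodes
    chosen at random, so content servers never appear in any peer's
    neighbour list and remain anonymous to a probing attacker."""
    if k > len(shield_nodes):
        raise ValueError("not enough shield nodes")
    return {peer: random.sample(shield_nodes, k) for peer in peers}

random.seed(5)
table = assign_shields(["p1", "p2", "p3"], ["s1", "s2", "s3", "s4"])
print(table)
```

Because every peer-visible upload comes from a shield node, the anonymity attack described in Chapter 3 can at best identify shields, not the servers behind them.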
836

Power-Aware Datacenter Networking and Optimization

Yi, Qing 02 March 2017 (has links)
Present-day datacenter networks (DCNs) are designed to achieve full bisection bandwidth in order to provide high network throughput and server agility. However, the average utilization of typical DCN infrastructure is below 10% for significant time intervals. As a result, energy is wasted during these periods. In this thesis we analyze traffic behavior of datacenter networks using traces as well as simulated models. Based on the insight developed, we present techniques to reduce energy waste by making energy use scale linearly with load. The solutions developed are analyzed via simulations, formal analysis, and prototyping. The impact of our work is significant because the energy savings we obtain for networking infrastructure of DCNs are near optimal. A key finding of our traffic analysis is that network switch ports within the DCN are grossly under-utilized. Therefore, the first solution we study is to modify the routing within the network to force most traffic to the smallest of switches. This increases the hop count for the traffic but enables the powering off of many switch ports. The exact extent of energy savings is derived and validated using simulations. An alternative strategy we explore in this context is to replace about half the switches with fewer switches that have higher port density. This has the effect of enabling even greater traffic consolidation, thus enabling even more ports to sleep. Finally, we explore a third approach in which we begin with end-to-end traffic models and incrementally build a DCN topology that is optimized for that model. In other words, the network topology is optimized for the potential use of the datacenter. This approach makes sense because, as other researchers have observed, the traffic in a datacenter is heavily dependent on the primary use of the datacenter. A second line of research we undertake is to merge traffic in the analog domain prior to feeding it to switches. 
This is accomplished by use of a passive device we call a merge network. Using a merge network enables us to attain linear scaling of energy use with load regardless of datacenter traffic models. The challenge in using such a device is that layer 2 and layer 3 protocols require a one-to-one mapping of hardware addresses to IP (Internet Protocol) addresses. We overcome this problem by building a software shim layer that hides the fact that traffic is being merged. In order to validate the idea of a merge network, we build a simple merge network for gigabit optical interfaces and demonstrate correct operation of layer 2 and layer 3 protocols at line speed. We also conduct measurements to study how traffic gets mixed in the merge network prior to being fed to the switch. We also show that the merge network uses only a fraction of a watt of power, which makes this a very attractive solution for energy efficiency. In this research we have developed solutions that enable linear scaling of energy with load in datacenter networks. The different techniques developed have been analyzed via modeling and simulations as well as prototyping. We believe that these solutions can be easily incorporated into future DCNs with little effort.
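The consolidation strategies described above amount to packing traffic onto as few switch ports as possible so that the remainder can sleep. A generic first-fit-decreasing sketch of that idea (the loads and capacity are invented; the thesis's routing-based consolidation is more involved):

```python
def consolidate(loads, port_capacity):
    """First-fit decreasing: place each flow load on the first active
    port with room, opening a new port only when none fits; the
    unused ports can then be powered down."""
    ports = []                       # load currently carried per active port
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(ports):
            if used + load <= port_capacity:
                ports[i] += load
                break
        else:
            ports.append(load)       # no port had room: wake one more
    return ports

loads = [2, 5, 1, 4, 3, 1]           # hypothetical per-flow gigabits
active = consolidate(loads, port_capacity=10)
print(f"{len(active)} ports active instead of {len(loads)}")
```

With average utilization below 10%, this kind of packing is what lets energy use track load instead of provisioned capacity.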
837

Utilising behaviour history and fuzzy trust levels to enhance security in ad-hoc networks

Hallani, Houssein, University of Western Sydney, College of Health and Science, School of Computing and Mathematics January 2007 (has links)
A wireless Ad-hoc network is a group of wireless devices that communicate with each other without utilising any central management infrastructure. The operation of Ad-hoc networks depends on cooperation among nodes to provide connectivity and communication routes. However, such an ideal situation may not always be achievable in practice. Some nodes may behave maliciously, resulting in degradation of the performance of the network or even disruption of its operation altogether. The ease of establishment, along with the mobility capabilities that these networks offer, provides many advantages. On the other hand, these very characteristics, as well as the lack of any centralised administration, are the root of several nontrivial challenges in securing such networks. One of the key objectives of this thesis is to achieve improvements in the performance of Ad-hoc networks in the presence of malicious nodes. In general, malicious nodes are considered to be nodes that subvert the capability of the network to perform its expected functions. Current Ad-hoc routing protocols, such as the Ad-hoc On demand Distance Vector (AODV), have been developed without taking the effects of misbehaving nodes into consideration. In this thesis, to mitigate the effects of such nodes and to attain high levels of security and reliability, an approach based on the utilisation of the behaviour history of all member nodes is proposed. The aim of the proposed approach is to identify routes between the source and the destination that contain no malicious nodes or, if that is not possible, as few of them as possible. This is in contrast to traditional approaches, which predominantly tend to use other criteria, such as the shortest path alone. Simulation and experimental results collected after applying the proposed approach show significant improvements in the performance of Ad-hoc networks even in the presence of malicious nodes.
However, to achieve further enhancements, this approach is expanded to incorporate trust levels between the nodes comprising the Ad-hoc network. Trust is an important concept in any relation among entities that comprise a group or network. Yet it is hard to quantify trust or define it precisely. Due to the dynamic nature of Ad-hoc networks, quantifying trust levels is an even more challenging task. This may be attributed to the fact that a large number of factors can affect trust levels between the nodes of Ad-hoc networks. It is well established that fuzzy logic and soft computing offer excellent solutions for handling imprecision and uncertainties. This thesis expands on relevant fuzzy logic concepts to propose an approach to establish quantifiable trust levels between the nodes of Ad-hoc networks. To achieve quantification of the trust levels for nodes, information about the behaviour history of the nodes is collected. This information is then processed to assess and assign fuzzy trust levels to the nodes that make up the Ad-hoc network. These trust levels are then used in the routing decision-making process. The performance of an Ad-hoc network that implements the behaviour history based approach is evaluated for various topologies using the OPtimised NETwork (OPNET) simulator. The overall collected results show that the throughput, the packet loss rate, and the round trip delay are significantly improved when the behaviour history based approach is applied. Results also show further enhancements in the performance of the Ad-hoc network when the proposed fuzzy trust evaluation approach is incorporated, with a slight increase in the routing traffic overhead. Given the improvements achieved when the fuzzy trust approach is utilised, future work to combine this approach with other artificial intelligence techniques may prove fruitful for further enhancing the security and reliability of Ad-hoc networks.
The learning capability of Artificial Neural Networks makes them a prime target for combination with fuzzy based systems in order to improve the proposed trust level evaluation approach. / Doctor of Philosophy (PhD)
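The mapping from behaviour history to a fuzzy trust level can be sketched as follows — a minimal illustration assuming a forwarding-ratio history and three triangular membership sets, not the thesis's actual membership functions or rule base:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership rising from a to a peak at b,
    falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_trust(forwarded, dropped):
    """Map a node's behaviour history to a trust level in [0, 1]:
    fuzzify the forwarding ratio into low/medium/high sets, then
    defuzzify with a weighted average of the set centres."""
    total = forwarded + dropped
    ratio = forwarded / total if total else 0.5   # no history: neutral
    low = tri(ratio, -0.01, 0.0, 0.5)
    med = tri(ratio, 0.20, 0.5, 0.8)
    high = tri(ratio, 0.50, 1.0, 1.01)
    return (0.1 * low + 0.5 * med + 0.9 * high) / (low + med + high)

print(fuzzy_trust(90, 10), fuzzy_trust(10, 90))   # well-behaved vs malicious node
```

A routing layer could then prefer next hops whose trust exceeds a threshold, which is the role these levels play in the route selection described above.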
838

Sharing network measurements on peer-to-peer networks

Fan, Bo, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
With the extremely rapid development of the Internet in recent years, emerging peer-to-peer network overlays are meeting the requirements of a more sophisticated communications environment, providing a useful substrate for applications such as scalable file sharing, data storage, large-scale multicast, web caching, and publish-subscribe services. Due to their design flexibility, peer-to-peer networks can offer features including self-organization, fault-tolerance, scalability, load-balancing, locality and anonymity. As the Internet grows, there is an urgent requirement to understand real-time network performance degradation. Measurement tools currently used are ping, traceroute and variations of these. SNMP (Simple Network Management Protocol) is also used by network administrators to monitor local networks. However, ping and traceroute can only be used temporarily, SNMP can only be deployed at certain points in networks, and these tools are incapable of sharing network measurements among end-users. Due to the distributed nature of network performance data, peer-to-peer overlay networks present an attractive platform for distributing this information among Internet users. This thesis investigates the desirable locality property of peer-to-peer overlays to create an application for sharing Internet performance measurements. When measurement data are distributed amongst users, they need to be localized in the network, allowing users to retrieve them when external Internet links fail. Thus, network locality and robustness are the most desirable properties. Although some unstructured overlays also integrate locality in their design, they fail to reach rarely located data items. Consequently, structured overlays are chosen because they can locate a rare data item deterministically and can perform well during network failures.
Among structured peer-to-peer overlays, Tapestry, Pastry and Chord with proximity neighbour selection were studied due to their explicit notion of locality. To differentiate the level of locality and resiliency in these protocols, P2Psim simulations were performed. The results show that Tapestry is the most suitable peer-to-peer substrate on which to build such an application, due to its superior data-localization performance. Furthermore, due to the routing similarity between Tapestry and Pastry, an implementation that shares network measurement information was developed on FreePastry, verifying the feasibility of the application. This project also contributes an extension of P2Psim that integrates GT-ITM topologies and link failures.
839

Performance evaluation of ETX on grid based wireless mesh networks

Ni, Xian, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2008 (has links)
In the past few years Wireless Mesh Networks (WMNs) have developed as a promising technology to provide flexible and low-cost broadband network services. The Expected Transmission Count (ETX) routing metric has been put forward recently as an advanced routing metric to provide high QoS for static WMNs. Most previous research in this area suggests that ETX outperforms other routing metrics in throughput and efficiency. However, it has been determined that ETX is not immune to load sensitivity and route oscillations in a single-radio environment. Route oscillations refer to the situation where packet transmission switches between two or more routes due to congestion. This degrades network performance, as the routing protocol may select a non-optimal path. In this thesis we avoid the route oscillation problem by forcing data transmission onto fixed routes. This can be implemented in the AODV (Ad hoc On-demand Distance Vector) protocol by disabling both error messages and periodic updating messages (the HELLO scheme). However, a critical factor for our approach is that ETX must determine a high-quality initial route in AODV. This thesis investigates whether the ETX metric improves initial route selection in AODV compared to the HOPS metric in two representative client-server applications: the Traffic Control Network (TCN) and the Video Stream (VS) network. We evaluate the ETX and HOPS metrics in a range of scenarios with different link qualities and different traffic loads. We find that the ETX metric greatly improves initial route selection in AODV compared to HOPS in networks carrying only a single flow. For networks with multiple simultaneous flows, ETX behaves similarly to HOPS in initial route selection. Based on these results, we find that route stabilization as a remedy for route oscillations under ETX is useful only in the single-flow case. 
To address this problem, we propose a modified solution in which RREQ (Route Request) packets are broadcast repeatedly. Simulation results show that our modified solution allows ETX to improve initial route selection in both the single-flow and multiple-flow cases.
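The ETX metric itself is simple to state: a link's expected transmission count is 1/(df x dr), where df and dr are the measured forward and reverse probe delivery ratios, and a route's ETX is the sum over its links. A small sketch with hypothetical delivery ratios shows why ETX and HOPS can disagree on the initial route:

```python
def link_etx(df, dr):
    """ETX of one link: expected transmissions (including retries),
    given forward (df) and reverse (dr) probe delivery ratios."""
    return 1.0 / (df * dr)

def path_etx(links):
    """A route's ETX is the sum of its links' ETX values."""
    return sum(link_etx(df, dr) for df, dr in links)

# Hypothetical ratios: a 2-hop route over clean links vs a 1-hop
# route over a lossy link. HOPS prefers the short lossy route;
# ETX prefers the longer but more reliable one.
two_hop_clean = [(0.95, 0.95), (0.90, 0.95)]
one_hop_lossy = [(0.50, 0.60)]
print(path_etx(two_hop_clean), path_etx(one_hop_lossy))
```

Here the two clean hops cost fewer expected transmissions than the single lossy hop, which is precisely the improvement in initial route selection evaluated in this thesis.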
840

Knowledge based anomaly detection

Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task. Most of them attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. Firstly, a large amount of training data for all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad-hoc situations. In contrast, in the proposed method, a number of models are gradually created to cover a variety of seen patterns while in use. Each model covers a specific region of the problem space, so any novel or ad-hoc patterns can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules (RDR). In essence, we use Ripple Down Rules to partition a domain, and add new partitions as new situations are identified. Within each supposedly homogeneous partition, we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they are reasonably robust with small amounts of data. This critical situation occurs whenever a new partition is added. We have developed a two-knowledge-base approach: one knowledge base partitions the domain; within each partition, statistics are accumulated on a number of different parameters; and the resultant data are passed to a second knowledge base, which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, but with the advantage that the RDR approach allows new patterns of use to be rapidly added to the model. We also used the approach to extend previous work on prudent expert systems: expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
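The "fairly simple statistical techniques... robust with small amounts of data" can be sketched as a median/IQR band computed per partition — a generic illustration of the idea, not the thesis's exact statistics (`statistics.quantiles` requires Python 3.8+):

```python
import statistics

def is_anomalous(history, value, k=3.0):
    """Flag a measurement as anomalous when it falls outside a robust
    band around this partition's history; median and IQR stay stable
    even with the few samples a freshly added partition has."""
    if len(history) < 4:
        return False              # too little evidence: never alarm
    q1, _, q3 = statistics.quantiles(history, n=4)
    median = statistics.median(history)
    band = max(q3 - q1, 1e-9)     # guard against a zero-width IQR
    return abs(value - median) > k * band

# Hypothetical per-partition history, e.g. weekday traffic counts.
weekday_traffic = [102, 98, 110, 95, 105, 99, 101]
print(is_anomalous(weekday_traffic, 104), is_anomalous(weekday_traffic, 400))
```

Each RDR partition would keep its own small history like this, so a holiday partition and a weekday partition can carry very different notions of "normal".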
