641 |
Product Introduction with Network Externalities - YEH, HSI-CHUAN 28 June 2001 (has links)
none
|
642 |
NONE - She, Jong-Chuan 27 July 2001 (has links)
NONE
|
643 |
The Speech Recognition System using Neural Networks - Chen, Sung-Lin 06 July 2002 (has links)
This paper describes an isolated-word, speaker-independent Mandarin digit speech recognition system based on Backpropagation Neural Networks (BPNN). The recognition rate reaches 95%, and when the system is adapted to a new user with an adaptive modification method it exceeds 99%. In order to implement the system on a Digital Signal Processor (DSP), we apply a neuron-cancellation rule to the BPNN: about one third of the neurons are removed, memory size is reduced by 20% to 40%, and the recognition rate still reaches 85%. For the output structure of the BPNN, we present a binary code to supersede the one-to-one (one neuron per digit) model. In addition, we propose a new endpoint detection algorithm for the recorded signals that rejects disturbances without complex computation.
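As an illustration of the binary-coded output idea (a minimal sketch, not the authors' implementation; the encoding width and helper names are assumptions), the ten digits can be represented with four output neurons instead of ten:

```python
import numpy as np

# Hypothetical sketch: 4-bit binary-coded targets for the 10 Mandarin digits,
# replacing a one-to-one (10-neuron, one-hot) output layer in a BPNN.

N_DIGITS = 10
N_OUT = 4                      # ceil(log2(10)) output neurons instead of 10

def encode(digit):
    """Map a digit 0..9 to its 4-bit target vector, e.g. 6 -> [0, 1, 1, 0]."""
    return np.array([(digit >> b) & 1 for b in reversed(range(N_OUT))], dtype=float)

def decode(outputs):
    """Threshold the 4 network outputs at 0.5 and rebuild the digit index."""
    bits = (np.asarray(outputs) > 0.5).astype(int)
    value = int("".join(map(str, bits)), 2)
    return value if value < N_DIGITS else None   # codes 10..15 are rejected

# Example: targets for a small batch of training labels, and decoding one output.
labels = [0, 3, 6, 9]
targets = np.stack([encode(d) for d in labels])
print(targets)
print(decode([0.1, 0.9, 0.8, 0.2]))   # -> 6
```

The trade-off is four output neurons instead of ten, at the cost of having to reject the six unused codes.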
|
644 |
Using Local Invariant in Occluded Object Recognition by Hopfield Neural Network - Tzeng, Chih-Hung 11 July 2003 (has links)
In this research, we propose a novel local invariant for 2-D image contour recognition based on the Hopfield-Tank neural network. First, we locate feature points on the contour, that is, positions of high curvature and corners, and use polygonal approximation to describe the contour. Two patterns are defined: a model pattern and a test pattern. The Hopfield-Tank network is employed to perform feature matching between them. Our results show that the method handles test patterns under translation, rotation and scaling, whether the pattern appears alone or is partially occluded.
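A rough sketch of such a matching stage is given below (an illustrative reconstruction, not the thesis code): mean-normalized pairwise distances between polygon vertices act as a similarity-invariant compatibility measure, and an asynchronous Hopfield-style update relaxes a model-to-test assignment matrix; all tolerances and parameter values are assumptions.

```python
import numpy as np

def pairwise_dist(pts):
    """Euclidean distance matrix between 2-D contour feature points."""
    d = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

def hopfield_match(model_pts, test_pts, iters=2000, seed=0):
    """Relax a binary matrix V[i, j] meaning 'model vertex i matches test vertex j'.

    Two candidate matches (i, j) and (k, l) support each other when the
    mean-normalized distances d_model(i, k) and d_test(j, l) agree, which is
    unaffected by translation, rotation and (through the normalization) scale.
    """
    rng = np.random.default_rng(seed)
    m, n = len(model_pts), len(test_pts)
    dm = pairwise_dist(model_pts); dm /= dm.mean()
    dt = pairwise_dist(test_pts);  dt /= dt.mean()
    V = rng.random((m, n))                      # soft initial assignment

    for _ in range(iters):                      # asynchronous neuron updates
        i, j = rng.integers(m), rng.integers(n)
        support = sum(
            (1.0 if abs(dm[i, k] - dt[j, l]) < 0.1 else -1.0) * V[k, l]
            for k in range(m) for l in range(n) if k != i and l != j
        )
        # Row/column inhibition pushes the solution toward a one-to-one matching.
        inhibition = V[i, :].sum() + V[:, j].sum() - 2 * V[i, j]
        V[i, j] = 1.0 if support - inhibition > 0 else 0.0
    return V

# Toy usage: a square versus a rotated, scaled and shifted copy; the relaxed
# matrix should settle into a one-to-one matching (up to the square's symmetry).
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
test = 2.5 * square @ R.T + np.array([3.0, -1.0])
print(hopfield_match(square, test).round())
```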
|
645 |
Flow control techniques for real-time media applications in best-effort networks using fluid models - Konstantinou, Apostolos 15 November 2004 (has links)
Quality of Service (QoS) in real-time media applications is an area of current interest because of the increasing demand for audio/video and, more generally, multimedia applications over best-effort networks such as the Internet. Media applications are transported using the User Datagram Protocol (UDP) and tend to use a disproportionate amount of network bandwidth as they do not perform congestion or flow control. Methods for application QoS control are desirable to enable users to perceive a consistent media quality. This can be accomplished by either modifying current protocols at the transport layer or by implementing new control algorithms at the application layer irrespective of the protocol used at the transport layer.
The objective of this research is to improve the QoS delivered to end-users in real-time applications transported over best-effort packet-switched networks. This is accomplished using UDP at the transport layer, along with adaptive predictive and reactive control at the application layer. An end-to-end fluid model is used, including the source buffer, the network and the destination buffer. Traditional control techniques, along with more advanced adaptive predictive control methods, are considered in order to provide the desirable QoS and make a best-effort network an attractive channel for interactive multimedia applications. The effectiveness of the control methods is examined using a Simulink-based fluid-level simulator in combination with trace files extracted from the well-known network simulator ns-2. The results show that improvement in real-time applications transported over best-effort networks using unreliable transport protocols, such as UDP, is feasible. The improvement in QoS is reflected in the reduction of flow loss at the expense of flow dead-time increase or playback disruptions or both.
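A toy discrete-time version of such a fluid model with a purely reactive (proportional) sender controller might look like the following; the network is reduced to a constant feedback delay and a time-varying bottleneck bandwidth, and every constant is an assumption rather than a value from the thesis.

```python
import numpy as np

def simulate(steps=300, dt=0.02, delay=5, target=1.0, kp=2.0, seed=1):
    """Toy fluid model: sender rate -> network bottleneck -> playback buffer.

    The sender adjusts its rate proportionally to the error between a target
    buffer level and the delayed buffer measurement (simple reactive control).
    """
    rng = np.random.default_rng(seed)
    playback = 1.0                                   # constant media drain rate
    bandwidth = (playback
                 + 0.5 * np.sin(np.linspace(0, 6 * np.pi, steps))
                 + 0.1 * rng.standard_normal(steps)) # time-varying available bandwidth
    buf = np.zeros(steps)                            # destination buffer level
    rate = np.zeros(steps)                           # controlled sending rate
    for t in range(1, steps):
        feedback = buf[max(t - delay, 0)]            # delayed buffer measurement
        rate[t] = max(0.0, playback + kp * (target - feedback))
        arriving = min(rate[t], bandwidth[t])        # bottleneck clips the flow
        buf[t] = max(0.0, buf[t - 1] + dt * (arriving - playback))
    loss = np.maximum(rate - bandwidth, 0).sum() * dt  # fluid dropped at the bottleneck
    starvation = (buf == 0).sum()                      # steps with playback disruption
    return loss, starvation

print(simulate())
```

The two returned quantities mirror the trade-off described in the abstract: less flow loss generally comes at the price of more dead time or playback disruption.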
|
646 |
Applications of artificial neural networks in the identification of flow units, Happy Spraberry Field, Garza County, Texas - Gentry, Matthew David 17 February 2005 (has links)
The use of neural networks in the field of development geology is in its infancy. In this study, a neural network is used to identify flow units in Happy Spraberry Field, Garza County, Texas. A flow unit is the mappable portion of the total reservoir within which geological and petrophysical properties that affect the flow of fluids are consistent and predictably different from the properties of other reservoir rock volumes (Ebanks, 1987). Ahr and Hammel (1999) further state that a highly ranked flow unit (i.e., a good flow unit) has the highest combined values of porosity and permeability with the least resistance to fluid flow. A flow unit may also include nonreservoir features such as shales and cemented layers where combined porosity-permeability values are lower and resistance to fluid flow much higher (i.e., a poor flow unit) (Ebanks, 1987).
Production from Happy Spraberry Field comes primarily from a 100-foot interval of grainstones and packstones, Leonardian in age, at an average depth of 4,900 feet. Happy Spraberry Field is unusual in that the majority of its wells have been cored in the zone of interest, which makes it particularly well suited to a study involving neural networks.
A neural network model was developed using a data set of 409 points where X and Y location, depth, gamma ray, deep resistivity, density porosity, neutron porosity, lab porosity, lab permeability and electrofacies were known throughout Happy Spraberry Field. The model contained a training data set of 205 cases, a verification data set of 102 cases and a testing data set of 102 cases. Ultimately, two neural network models were created to identify electrofacies and reservoir quality (i.e., flow units). The neural networks outperformed linear methods, achieving correct classification rates of 0.87 for electrofacies identification and 0.75 for reservoir quality identification.
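A workflow of this kind can be sketched with scikit-learn (an illustration, not the study's actual model or data): a small feed-forward network is trained on the nine well-log attributes and scored on held-out cases using the 205/102/102 split; the synthetic data, column ordering and network size are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the 409-point data set: each row holds X/Y location,
# depth, gamma ray, deep resistivity, density/neutron/lab porosity and lab
# permeability; the label is an electrofacies (or reservoir-quality) class.
rng = np.random.default_rng(0)
X = rng.normal(size=(409, 9))
y = rng.integers(0, 4, size=409)          # e.g. four electrofacies classes

# Roughly the 205/102/102 train/verification/test split described in the abstract.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=205, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=102, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("verification accuracy:", model.score(X_val, y_val))
print("test accuracy:", model.score(X_test, y_test))
```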
|
647 |
Real-time analysis of aggregate network traffic for anomaly detection - Kim, Seong Soo 29 August 2005 (has links)
Frequent and large-scale network attacks have led to an increased need for techniques to analyze network traffic. With efficient analysis tools, attacks and anomalies could be detected and appropriate action taken to contain them before they have time to propagate across the network.
In this dissertation, we propose a technique for traffic anomaly detection based on analyzing the correlation of destination IP addresses and the distribution of image-based signals, both postmortem and in real time, by passively monitoring packet headers. The address correlation data are transformed using the discrete wavelet transform for effective detection of anomalies through statistical analysis. Results from trace-driven evaluation suggest that the proposed approach can provide an effective means of detecting anomalies close to the source. We present a multidimensional indicator that uses the correlation of port numbers as a means of detecting anomalies.
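The wavelet step can be illustrated with a plain-NumPy Haar transform (a sketch, not the dissertation's implementation): an address-correlation signal sampled per time bin is decomposed, and bins whose finest-scale detail coefficients lie several standard deviations out are flagged; the threshold and the synthetic trace are assumptions.

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """One-dimensional Haar wavelet decomposition (approximation + detail bands)."""
    approx, details = np.asarray(signal, float), []
    for _ in range(levels):
        if len(approx) % 2:                      # pad odd-length signals
            approx = np.append(approx, approx[-1])
        pairs = approx.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    return approx, details

def flag_anomalies(signal, k=3.0):
    """Flag time bins whose finest-scale detail coefficients are k-sigma outliers."""
    _, details = haar_dwt(signal)
    d = details[0]                               # finest-scale detail band
    score = np.abs(d - d.mean()) / (d.std() + 1e-9)
    return np.where(score > k)[0] * 2            # map back to original bin index

# Toy destination-address correlation trace with an injected surge (e.g. a scan);
# the surge edges produce large detail coefficients and are flagged.
rng = np.random.default_rng(0)
trace = rng.normal(10.0, 1.0, 256)
trace[141:145] += 15.0                           # abrupt change in address behaviour
print("anomalous bins near:", flag_anomalies(trace))
```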
We also present a network measurement approach that can simultaneously detect, identify and visualize attacks and anomalous traffic in real time. We propose to represent samples of network packet header data as frames or images. With such a formulation, a series of samples can be seen as a sequence of frames, or video. This enables techniques from image processing and video compression, such as the DCT, to be applied to the packet header data to reveal interesting properties of traffic. We show that "scene change analysis" can reveal sudden changes in traffic behavior or anomalies. We show that "motion prediction" techniques can be employed to understand the patterns of some of the attacks. We show that it may be feasible to represent multiple pieces of data as different colors of an image, enabling a uniform treatment of multidimensional packet header data.
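A minimal version of the frame formulation might look like this (an illustrative sketch, not the dissertation's code): each time window of packet headers is rendered as a small 2-D histogram, and windows whose frame-to-frame difference energy is an outlier are reported as scene changes; bin sizes, field choices and the synthetic trace are assumptions.

```python
import numpy as np

def header_frame(dst_addrs, src_ports, bins=32):
    """Render one time window of packet headers as a small 2-D histogram 'frame'."""
    h, _, _ = np.histogram2d(dst_addrs % bins, src_ports % bins,
                             bins=bins, range=[[0, bins], [0, bins]])
    return h / max(h.sum(), 1)                   # normalize per-window traffic volume

def scene_changes(frames, k=3.0):
    """Flag windows whose frame-to-frame difference energy is a k-sigma outlier."""
    energy = np.array([np.abs(frames[t] - frames[t - 1]).sum()
                       for t in range(1, len(frames))])
    score = (energy - energy.mean()) / (energy.std() + 1e-9)
    return np.where(score > k)[0] + 1

# Toy trace: benign windows, a one-window scan burst toward a single destination,
# then benign traffic again; the burst boundaries show up as two scene changes.
rng = np.random.default_rng(0)
def benign():
    return header_frame(rng.integers(0, 1 << 16, 500), rng.integers(0, 1 << 16, 500))
attack = header_frame(np.full(500, 4242), rng.integers(0, 1 << 16, 500))
frames = [benign() for _ in range(60)] + [attack] + [benign() for _ in range(4)]
print("scene changes at windows:", scene_changes(frames))
```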
Measurement-based techniques for analyzing network traffic treat traffic volume and traffic header data as signals or images in order to make the analysis feasible. In this dissertation, we propose an approach based on the classical Neyman-Pearson test employed in signal detection theory to evaluate these different strategies. We use both analytical models and trace-driven experiments to compare the performance of different strategies. Our evaluations on real traces reveal differences in the effectiveness of different traffic header data as potential signals for traffic analysis in terms of their detection rates and false alarm rates. Our results show that address distributions and the number of flows are better signals than traffic volume for anomaly detection. Our results also show that statistical techniques can sometimes be more effective than the NP test when the attack patterns change over time.
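The Neyman-Pearson comparison can be illustrated with a toy Gaussian example (assumed distributions, not the dissertation's traffic models): with equal variances the likelihood-ratio test reduces to thresholding the feature itself, so the threshold is set for a target false alarm rate under normal traffic and the detection rate under attack traffic is read off; sweeping the target rate traces out an ROC curve for comparing header signals.

```python
import numpy as np
from scipy.stats import norm

# Assumed Gaussian models for one traffic feature (e.g. the number of distinct
# destination addresses per time bin) under normal and attack conditions.
mu0, sigma = 100.0, 10.0       # H0: normal traffic
mu1 = 140.0                    # H1: attack traffic (same spread assumed)

target_pfa = 0.01              # acceptable false alarm rate

# With equal variances the Neyman-Pearson likelihood-ratio test reduces to a
# threshold on the feature value, chosen so the false alarm rate meets the target.
threshold = norm.ppf(1 - target_pfa, loc=mu0, scale=sigma)
pd = norm.sf(threshold, loc=mu1, scale=sigma)      # detection rate under H1
print(f"threshold = {threshold:.1f}, detection rate = {pd:.2%} at {target_pfa:.2%} false alarms")

# Sweeping the target false alarm rate traces out an ROC curve, which is how the
# effectiveness of different header signals can be compared.
pfas = np.logspace(-4, -1, 20)
roc = [(pfa, norm.sf(norm.ppf(1 - pfa, mu0, sigma), mu1, sigma)) for pfa in pfas]
```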
|
648 |
Impedance matching techniques for ethernet communication systems - Kamprath, Richard Alan 17 September 2007 (has links)
In modern local area networks, the communication signals sent from one computer to another across the transmission lines are degraded by reflection at the receiver. This reflection can be characterized through the impedances of the transmitter and the receiver, and is defined by the Institute of Electrical and Electronics Engineers (IEEE) as the S11 return loss. The specifications for S11 return loss in Gigabit Ethernet are given in terms of magnitude only in the IEEE 802.3 guidelines. This does not fully take into account, however, the effects of frequency-dependent impedances within the bandwidth of interest. With a range of 30% error in the category 5 (CAT5) transmission line impedance used in this specification, and no further requirements for individual components within the Gigabit Ethernet port, such as the RJ45 magjack or the physical layer, the system can easily be out of tolerance for return loss. A simple impedance matching circuit could match the CAT5 cable to the physical layer such that the return loss is minimized and the S21 transmission is maximized.
The first part of the project was commissioned by Dell Computer to characterize the return loss of all of its platforms. This thesis goes further in the creation of a system that can balance these two impedances so that the IEEE specification failure rate is reduced with the lowest implementation cost, size, power and complexity. The return loss data were used in the second phase of the project as the basis for the component ranges needed to balance the impedance seen at the front of the physical layer to the CAT5 transmission line. Using ladder network theory, an impedance matching circuit was created that significantly reduced the S11 return loss in the passband of the equivalent ladder network. To manage this iterative process, a control loop was also designed. While this system does not produce the accuracy that a programmable finite impulse response (FIR) filter could, it does improve performance with relatively minimal cost, power, area and complexity.
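The quantity being minimized can be shown in a few lines of NumPy (a sketch under assumed component values, not the thesis design): the reflection coefficient of a frequency-dependent load against the 100 ohm differential CAT5 reference is converted to S11 return loss in dB, and a single shunt capacitor is swept to see whether a one-element matching section improves the worst case over the band.

```python
import numpy as np

Z0 = 100.0                                   # differential CAT5 reference impedance (ohms)
f = np.linspace(1e6, 125e6, 500)             # band of interest for Gigabit Ethernet signalling
w = 2 * np.pi * f

def return_loss_db(Zl):
    """S11 return loss in dB of a load Zl against the Z0 reference."""
    gamma = (Zl - Z0) / (Zl + Z0)
    return -20 * np.log10(np.abs(gamma) + 1e-12)

# Hypothetical frequency-dependent PHY-side impedance: 85 ohm resistive part with
# a small series inductance (values are assumptions, not measured data).
Zl = 85.0 + 1j * w * 120e-9

# Try a shunt capacitor across the load as a one-element matching network and keep
# whichever value gives the best worst-case (minimum) return loss over the band.
best_C, best_rl = 0.0, return_loss_db(Zl).min()
for C in np.linspace(1e-12, 60e-12, 60):
    Zc = 1 / (1j * w * C)
    Zmatched = Zl * Zc / (Zl + Zc)           # shunt combination
    rl = return_loss_db(Zmatched).min()
    if rl > best_rl:
        best_C, best_rl = C, rl

print(f"unmatched worst-case return loss: {return_loss_db(Zl).min():.1f} dB")
print(f"best shunt C = {best_C * 1e12:.0f} pF, worst-case return loss: {best_rl:.1f} dB")
```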
|
649 |
Region-Based Movement for Coverage and Connectivity Maintenance in Wireless Sensor Networks - Lin, Mei-zuo 23 July 2008 (has links)
A wireless sensor network consists of a large number of sensors capable of sensing, communication and data processing. Predictable or unpredictable death of sensor nodes may cause coverage and connectivity problems in the original network. To compensate for the loss of coverage and connectivity, we propose a region-based movement scheme that divides the neighboring sensors of the dead sensor into a number of regions. The neighboring sensors move to repair their respective regions using the least mobility distance, without jeopardizing their existing coverage and connectivity. The scheme maintains the coverage and connectivity of the network well; the results show that it substantially decreases both the average mobility distance and the coverage deterioration.
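A simplified version of the region-based selection could look like the following (a geometric sketch with assumed radii, not the authors' protocol): the plane around the dead node is split into equal angular regions, and in each region the neighbor that needs the smallest displacement to bring the dead node's position back into coverage is chosen to move.

```python
import numpy as np

def region_based_repair(dead, neighbors, sensing_r=10.0, n_regions=4):
    """Pick, per angular region around the dead sensor, the neighbor that can
    repair the coverage hole with the least mobility distance.

    Returns a list of (neighbor_index, target_position, move_distance) tuples.
    """
    dead = np.asarray(dead, float)
    moves = []
    for r in range(n_regions):
        lo, hi = 2 * np.pi * r / n_regions, 2 * np.pi * (r + 1) / n_regions
        best = None
        for idx, pos in enumerate(neighbors):
            vec = np.asarray(pos, float) - dead
            angle = np.arctan2(vec[1], vec[0]) % (2 * np.pi)
            if not (lo <= angle < hi):
                continue                          # neighbor belongs to another region
            dist = np.linalg.norm(vec)
            # Move only far enough that the neighbor's sensing disk covers the dead
            # node's position, i.e. stop at distance sensing_r (least mobility).
            move = max(0.0, dist - sensing_r)
            target = dead + vec / dist * sensing_r if dist > sensing_r else np.asarray(pos, float)
            if best is None or move < best[2]:
                best = (idx, target, move)
        if best is not None:
            moves.append(best)
    return moves

# Toy usage: one dead sensor at the origin, neighbors scattered around it.
neighbors = [(14, 2), (-3, 12), (-11, -6), (5, -13), (18, 1)]
for idx, target, move in region_based_repair((0, 0), neighbors):
    print(f"neighbor {idx} moves {move:.1f} units to {np.round(target, 1)}")
```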
|
650 |
Network Security Planning for New Generation Network Service Providers - Huang, Shao-Chuan 25 July 2009 (has links)
The Internet and e-commerce are becoming increasingly popular. Various network applications and services have become indispensable tools for most enterprises, such as e-mail, company portal websites, and servers that let employees share information.
As the Internet brings convenience and business opportunities, and as e-commerce develops further, these IT applications create enormous value for enterprises. However, the security of the Internet remains a never-ending issue. External attacks such as viruses, worms, Trojan horses, backdoor programs, spyware and hacker activities have never stopped, and enterprises have suffered great losses from them. Therefore, a company's IT staff are expected to develop and install a suitable protection system to guarantee the security of the company's information assets.
The case company examined in this paper is the largest ISP in Taiwan, with more than three million customers. The company also provides its more than 20,000 staff with an internal network and network management equipment for their routine work, so its network and information security concerns are more complicated than those of ordinary commercial companies.
This research discusses the company's management and network security planning from structural and system perspectives, not only to strengthen the information security of the existing network, but also to offer IT planners a valuable reference when performing related work.
|