1. Upper Bound Analysis and Routing in Optical Benes Networks
Zhong, Jiling, 12 January 2006
Multistage Interconnection Networks (MINs) are popular in switching and communication applications and have been used in telecommunication and parallel computing systems for many years. The new challenge facing optical MINs is crosstalk, which is caused by coupling between two signals within a switching element. Crosstalk is not a serious issue in the electrical domain, but because of the stringent bit error rate (BER) constraint, it is a major concern in the optical domain. In this dissertation, we study the blocking probability of optical networks and the deterministic conditions for strictly non-blocking Vertically Stacked Optical Benes Networks (VSOBN), with and without the worst-case scenario. We establish an upper bound on the blocking probability of VSOBN with respect to the number of planes used when the non-blocking requirement is not met. We then study routing in WDM Benes networks and propose a new routing algorithm that reduces the number of wavelengths required. Since routing in WDM optical networks is an NP-hard problem, many researchers have designed heuristic algorithms to perform this routing. We also develop a genetic algorithm, a simulated annealing algorithm, and an ant colony technique, and apply these AI algorithms to route connections in WDM Benes networks.
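As a concrete illustration of the kind of heuristic the abstract refers to (not the dissertation's own algorithm), the sketch below applies simulated annealing to wavelength assignment: lightpaths become vertices of a conflict graph, an edge joins any two lightpaths that share a fiber link, and the annealer searches for an assignment that first removes conflicts and then uses few distinct wavelengths. All names, the cost weights, the cooling schedule, and the toy conflict set are illustrative assumptions.

```python
# A minimal simulated-annealing sketch for wavelength assignment, assuming the
# problem is modeled as coloring a conflict graph of lightpaths. Illustrative
# only; it is not the algorithm developed in the dissertation.
import math
import random

def sa_wavelength_assignment(conflicts, num_paths, max_wavelengths,
                             steps=20000, t0=2.0, cooling=0.9995, seed=0):
    """conflicts: pairs (i, j) of lightpaths that share at least one link.
    Returns (assignment, cost) where assignment[p] is the wavelength of path p."""
    rng = random.Random(seed)
    assign = [rng.randrange(max_wavelengths) for _ in range(num_paths)]

    def cost(a):
        clashes = sum(1 for i, j in conflicts if a[i] == a[j])
        return 10 * clashes + len(set(a))  # conflicts dominate, then wavelength count

    cur, t = cost(assign), t0
    for _ in range(steps):
        p = rng.randrange(num_paths)            # perturb one lightpath's wavelength
        old = assign[p]
        assign[p] = rng.randrange(max_wavelengths)
        new = cost(assign)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                           # accept (always downhill, sometimes uphill)
        else:
            assign[p] = old                     # reject and restore
        t *= cooling
    return assign, cur

# Toy instance: 4 lightpaths; paths 0/1 and 2/3 each share a fiber link.
assignment, final_cost = sa_wavelength_assignment([(0, 1), (2, 3)],
                                                  num_paths=4, max_wavelengths=3)
print(assignment, final_cost)
```

On this toy instance the annealer typically settles on a conflict-free assignment that uses two wavelengths, the minimum possible here.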
2. Probability of Belonging to a Language
Cook, Kevin Michael Brooks, 16 April 2013
Conventional language models estimate the probability that a word sequence within a chosen language will occur. By contrast, the purpose of our work is to estimate the probability that a word sequence belongs to the chosen language. The language of interest in our research is comprehensible, well-formed English. We explain how conventional language models assume what we refer to as a degree of generalization, the extent to which a model generalizes from a given sequence, and why such an assumption may hinder estimation of the probability that a sequence belongs. We show that the probability that a word sequence belongs to a chosen language (represented by a given sequence) can be estimated without assuming a degree of generalization, and we introduce two methods for doing so: Minimal Number of Segments (MINS) and Segment Selection. We demonstrate that in some cases both MINS and Segment Selection distinguish sequences that belong from those that do not better than any other method we tested, including Good-Turing, interpolated modified Kneser-Ney, and the Sequence Memoizer.
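To make the MINS idea concrete, here is a small sketch of one plausible reading of it (the thesis's exact formulation may differ): cover the test sentence with as few contiguous word segments as possible, where each segment must occur verbatim in the reference corpus and an unseen word counts as its own segment. Because the set of corpus segments is closed under taking sub-segments, a greedy longest-match scan yields the minimal count. The function name `min_segments` and the unseen-word convention are assumptions made for illustration.

```python
def min_segments(sentence, corpus):
    """Greedy cover of `sentence` (a list of words) by the fewest contiguous
    segments, each occurring verbatim in `corpus` (a list of words).
    Illustrative reading of MINS, not necessarily the thesis's exact method."""
    def occurs(seg):
        k = len(seg)
        return any(corpus[p:p + k] == seg for p in range(len(corpus) - k + 1))

    i, segments, n = 0, 0, len(sentence)
    while i < n:
        j = i + 1
        # greedily extend the current segment while it still appears in the corpus
        while j < n and occurs(sentence[i:j + 1]):
            j += 1
        segments += 1      # a word never seen in the corpus becomes its own segment
        i = j
    return segments

corpus = "the cat sat on the mat".split()
print(min_segments("the cat sat".split(), corpus))  # 1: occurs verbatim in the corpus
print(min_segments("the dog sat".split(), corpus))  # 3: "the" + unseen "dog" + "sat"
```

Intuitively, a sequence that belongs to the language needs few segments while one that does not fragments into many; how the count is turned into a probability is the thesis's contribution and is not modeled in this sketch.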