  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Recognition, generation, and application of binary matrices with the consecutive ones property

Dom, Michael January 2009 (has links)
Also published as: Jena, Univ., dissertation, 2009
2

A corpus-assisted study on modal verbs in consecutive interpreting

He, Yuan, William January 2018 (has links)
University of Macau / Faculty of Arts and Humanities. / Department of English
3

Weighted consecutive ones problems

Oswald, Marcus. January 2003 (has links) (PDF)
Heidelberg, Univ., dissertation, 2003. / Computer file, remote access.
4

Weighted consecutive ones problems

Oswald, Marcus. January 2003 (has links) (PDF)
Heidelberg, University, Diss., 2003.
5

Motion compensation for image compression : pel-recursive motion estimation algorithm

Reza-Alikhani, Hamid-Reza January 2002 (has links)
In motion pictures there is a certain amount of redundancy between consecutive frames. These redundancies can be exploited by using interframe prediction techniques. To further enhance the efficiency of interframe prediction, various motion estimation and compensation techniques can be used. There are two distinct techniques for motion estimation: block matching and pel-recursive. Block matching has been widely used, as it produces a better signal-to-noise ratio or a lower bit rate for transmission than the pel-recursive method. In this thesis, various pel-recursive motion estimation techniques, such as the steepest descent gradient algorithm, have been considered and simulated.
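The block-matching alternative mentioned in this abstract can be sketched as an exhaustive search that minimises the sum of absolute differences (SAD) between a block in the current frame and candidate blocks in the previous frame. This is an illustrative sketch only, not the thesis's pel-recursive method; the frame contents, block size, and search range below are hypothetical choices for the example.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, top, left, size):
    """Extract a size x size block with its top-left corner at (top, left)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def full_search(prev_frame, cur_frame, top, left, size=2, search=1):
    """Return the displacement (dy, dx) minimising SAD over a small window."""
    target = get_block(cur_frame, top, left, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ty, tx = top + dy, left + dx
            if ty < 0 or tx < 0 or ty + size > len(prev_frame) \
                    or tx + size > len(prev_frame[0]):
                continue  # candidate block falls outside the frame
            cost = sad(target, get_block(prev_frame, ty, tx, size))
            if best is None or cost < best[0]:
                best = (cost, (dy, dx))
    return best[1]

# A 4x4 frame whose top-left 2x2 block shifts right by one pixel.
prev = [[9, 1, 2, 0],
        [8, 3, 4, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
cur  = [[0, 9, 1, 0],
        [0, 8, 3, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(full_search(prev, cur, 0, 1))  # -> (0, -1): the block moved one pixel right
```

A pel-recursive estimator would instead update a displacement estimate per pixel by gradient descent on the displaced frame difference, rather than searching candidate positions exhaustively.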
6

Sums of Squares of Consecutive Integers

January 2010 (has links)
abstract: This thesis attempts to answer two questions based upon the historical observation that 1^2 + 2^2 + ... + 24^2 = 70^2. The first question changes the starting number on the left-hand side of the equation from 1 to any perfect square in the range 1 to 10000; I attempt to determine which perfect square the left-hand side should end with so that the right-hand side is a perfect square. Mathematically, Question #1 can be written as follows: given a positive integer r with 1 ≤ r ≤ 100, find all nontrivial solutions (N, M), if any, of r^2 + (r+1)^2 + ... + N^2 = M^2 with N, M ∈ Z+. The second question changes the number of terms on the left-hand side to any fixed whole number in the range 1 to 100; I attempt to determine which perfect square the left-hand side should start with so that the right-hand side is a perfect square. Mathematically, Question #2 can be written as follows: given a positive integer r with 1 ≤ r ≤ 100, find all solutions (u, v), if any, of u^2 + (u+1)^2 + (u+2)^2 + ... + (u+r-1)^2 = v^2 with u, v ∈ Z+. The two questions addressed by this thesis have been on the minds of many mathematicians for over 100 years, and much mathematics has been developed in the effort to answer them. This research was done to organize that mathematics in one easily accessible place. My findings on Question #1 can hopefully be used by future mathematicians to answer it completely, and my findings on Question #2 can hopefully be used as they attempt to answer it for values of r greater than 100. / Dissertation/Thesis / M.A. Mathematics 2010
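Question #2 in this abstract lends itself to a direct computational check. The following brute-force sketch (the search bound on u is an arbitrary choice for illustration) verifies the historical identity and finds further starting values for r = 24 terms.

```python
from math import isqrt

def consecutive_square_sums(r, u_max=100):
    """Return all (u, v) with u^2 + (u+1)^2 + ... + (u+r-1)^2 = v^2, u <= u_max."""
    solutions = []
    for u in range(1, u_max + 1):
        total = sum(k * k for k in range(u, u + r))
        v = isqrt(total)          # integer square root; exact iff total is a square
        if v * v == total:
            solutions.append((u, v))
    return solutions

# The historical identity that motivates the thesis: 24 terms starting from 1.
print(consecutive_square_sums(24))  # includes (1, 70), since 1^2+...+24^2 = 4900 = 70^2
```

Since the sum of r consecutive squares starting at u is r*u^2 + r*(r-1)*u + (r-1)*r*(2r-1)/6, the same search could be done symbolically, but the brute-force form mirrors the abstract's statement directly.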
7

Is working memory working in consecutive interpreting?

Jin, Ya-shyuan January 2010 (has links)
It is generally agreed that language interpreting is cognitively demanding; however, to date there is little evidence to indicate how working memory is involved in the task, perhaps due to methodological limitations. Based on a full consideration of key components of interpreting, two series of experiments were conducted to explore how working memory might play a role in discourse and sentence interpreting. If working memory is implicated both in grammatical encoding into the target language, and in temporary storage of the discourse content, then higher demand in one function might compromise the other. Thus discourses that differ in word orders between languages could increase the processing load and leave less resource for memory maintenance, affecting recall performance. In Experiment 1, Chinese-English bilingual participants' memory performance was compared when they translated passages from Chinese to English and from English to Chinese, where the expected word order was either congruent or incongruent between source and target. Recall was not sensitive to word order or direction of translation. Perhaps surprisingly, memory for incongruent discourses was numerically better than that for congruent sentences. Experiment 2 showed that interpreting trainees performed just like the participants in Experiment 1 did, suggesting that memory performance was not modulated by translation direction in proficient translators. Experiment 3 explored the relationship between surface form transformation and recall. As discourse paraphrasing did not result in better recall than verbatim recall, it was concluded that the better memory performance for incongruent discourse interpreting suggested by Experiment 1 was not the result of active manipulation of word form or word order in interpreting.
Finally, a free recall task among native English speakers showed that the incongruent discourses tested in earlier experiments were intrinsically more memorable than congruent discourses (Experiment 4). Despite this confound, this series of experiments highlighted the importance of comprehension in interpreting, but it did not rule out the role of working memory in the task. The role of working memory in interpreting was further explored using on-line measures in Experiments 5-8. Experiment 5 replicated a self-paced reading study by Ruiz, Paredes, Macizo, and Bajo (2008), comparing participants' times to read sentences for translation to those to read them normally. The data showed that participants accessed lexical and syntactic properties of a target language in the reading-for-translation condition when resources were available to them. In order to explore the role of working memory in sentence interpreting, a dual-task paradigm was used in Experiment 6. When participants' working memory was occupied by a secondary task (digit preload), reading times were only different numerically between congruent and incongruent sentences. Crucially, reading times decreased as digit preload increased. Since there were no differences in the interpretations produced or in digit recall, it appears that participants were flexible in their resource allocation, suggesting that processing can be tuned up to optimise performance for concurrent tasks. Experiment 7 refined the procedure in the order of responses for the dual tasks but replicated the results of Experiment 6. A closer examination of participants' interpretation responses showed that devices that could reduce processing load in target language production may have been strategically employed. Finally, another set of sentences were used in Experiment 8 in an attempt to replicate Experiment 5.
A failure to replicate the earlier findings suggested that working memory demand might differ for different syntactic structures in sentence interpreting. All in all, this thesis shows that research in language interpreting benefits by taking a full account of the key components of interpreting. The use of on-line measures allowed us to take a fine-grained approach to the investigation of interpretation processes. It is proposed in this thesis that interpreting research may gain more insight from the data by incorporating some of the theories and methods typically used in research into language production.
8

A comparison of the Effects of Different Sizes of Ceiling Rules on the Estimates of Reliability of a Mathematics Achievement Test

Somboon Suriyawongse 05 1900 (has links)
This study compared the estimates of reliability made using one, two, three, four, five, and unlimited consecutive failures as ceiling rules in scoring a mathematics achievement test which is part of the Iowa Tests of Basic Skills (ITBS), Form 8. There were 700 students randomly selected from a population (N = 2640) of students enrolled in the eighth grade in a large urban school district in the southwestern United States. These 700 students were randomly divided into seven subgroups of 100 students each. The responses of all these students to three subtests of the mathematics achievement battery, which included mathematical concepts (44 items), problem solving (32 items), and computation (45 items), were analyzed to obtain the item difficulties and a total score for each student. The items in each subtest were then rearranged by item difficulty from the highest to the lowest value. In each subgroup, the methods using one, two, three, four, five, and unlimited consecutive failures as the ceiling rules were applied to score the individual responses. The total score for each individual was the sum of the correct responses prior to the point described by the ceiling rule; correct responses after the ceiling rule were not counted toward the total score. The estimate of reliability for each method was computed with the alpha coefficient procedure of SPSS-X. The results of this study indicated that the estimates of reliability using two, three, four, and five consecutive failures as the ceiling rules were an improvement over the methods using one and unlimited consecutive failures.
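The scoring procedure this abstract describes can be sketched as follows. The response vector and ceiling values below are hypothetical illustrations; the study itself scored ITBS subtests and computed reliability with SPSS-X, which is not modelled here.

```python
def score_with_ceiling(responses, ceiling):
    """Sum correct responses (1s) up to the first run of `ceiling` consecutive 0s.

    Items are assumed ordered from easiest to hardest, as in the study.
    `ceiling=None` represents the unlimited rule: every correct response counts.
    """
    if ceiling is None:
        return sum(responses)
    total = 0
    consecutive_failures = 0
    for r in responses:
        if r:
            total += 1
            consecutive_failures = 0   # a success resets the failure run
        else:
            consecutive_failures += 1
            if consecutive_failures == ceiling:
                break                  # ceiling reached: later items do not count
    return total

responses = [1, 1, 0, 1, 0, 0, 1, 1]  # hypothetical, ordered easiest -> hardest
for rule in (1, 2, None):
    print(rule, score_with_ceiling(responses, rule))
```

With the sample vector, a one-failure ceiling stops after the third item, a two-failure ceiling stops at the first pair of misses, and the unlimited rule counts every correct response, so the three rules yield three different totals from the same responses.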
9

The segmentation problem in radiation therapy

Engelbeen, Céline 30 June 2010 (has links)
The segmentation problem arises in the elaboration of a radiation therapy plan. After the cancer has been diagnosed and the radiation therapy sessions have been prescribed, the physician has to locate the tumor as well as the organs situated in the radiation field, called the organs at risk. The physician also has to determine the different dosage he wants to deliver in each of them and has to define a lower bound on the dosage for the tumor (which represents the minimum amount of radiation that is needed to have a sufficient control of the tumor) and an upper bound for each organ at risk (which represents the maximum amount of radiation that an organ can receive without damaging). Designing a radiation therapy plan that respects these different bounds of dosage is a complex optimization problem that is usually tackled in three steps. The segmentation problem is one of them. Mathematically, the segmentation problem amounts to decomposing a given nonnegative integer matrix A into a nonnegative integer linear combination of some binary matrices. These matrices have to respect the consecutive ones property. In clinical applications several constraints may arise that reduce the set of binary matrices which respect the consecutive ones property that we can use. We study some of them, as the interleaf distance constraint, the interleaf motion constraint, the tongue-and-groove constraint and the minimum separation constraint. We consider here different versions of the segmentation problem with different objective functions. Hence we deal with the beam-on time problem in order to minimize the total time during which the patient is irradiated. We study this problem under the interleaf distance and the interleaf motion constraints. We consider as well this last problem under the tongue-and-groove constraint in the binary case. We also take into account the cardinality and the lex-min problem. Finally, we present some results for the approximation problem. 
/ The segmentation problem arises during the elaboration of a radiation therapy plan. Once the physician has located the tumor and the organs situated near it, he must also determine the different dosages to be delivered. He then sets a lower bound on the dosage the tumor must receive in order to achieve satisfactory control of it, and upper bounds on the dosages for the various organs situated in the field. To respect these bounds as well as possible, the radiation therapy plan must be prepared meticulously. We focus on one of the steps involved in determining this plan: the segmentation step. Mathematically, this step consists of decomposing a given nonnegative integer matrix into a nonnegative integer linear combination of certain binary matrices. These binary matrices must satisfy the consecutive ones property (this constraint requires that the ones in each row of these matrices be grouped into a single block). In clinical applications, additional constraints may restrict the set of binary matrices with the consecutive ones property (C1 matrices) that can be used. We have studied some of them, such as the interleaf distance constraint, the interleaf motion constraint, the tongue-and-groove constraint and the minimum separation constraint. The first problem we consider is to find a decomposition of the given matrix that minimizes the sum of the coefficients of the binary matrices. We have developed polynomial algorithms that solve this problem under the interleaf distance constraint and/or the interleaf motion constraint. Moreover, we have shown that, if the given matrix is binary, such a decomposition can be found in polynomial time under the tongue-and-groove constraint.
To shorten the duration of a radiation therapy session, it may be desirable to minimize the number of C1 matrices used in the decomposition (whether or not the sum of the coefficients has first been minimized). We study this problem in several special cases (the given matrix consists of a single column, or of a single row, or its largest entry is bounded by a constant). We present new lower bounds on the number of C1 matrices as well as new heuristics. Finally, we study the case where the set of C1 matrices does not allow the given matrix to be decomposed exactly. The goal is then to find a decomposable matrix that is as close as possible to the given matrix. After examining some polynomial cases, we prove that the general case is hard to approximate within an additive error of O(mn), where m and n are the dimensions of the given matrix.
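The unconstrained single-row case of the decomposition described in this abstract admits a simple sweep: a nonnegative integer row is written as a weighted sum of intervals of consecutive ones, with total weight (beam-on time) equal to the sum of the positive increases along the row. This sketch ignores the clinical constraints (interleaf distance, interleaf motion, tongue-and-groove, minimum separation) that the thesis actually studies, and the example row is hypothetical.

```python
def decompose_row(row):
    """Decompose a nonnegative integer row into (weight, left, right) intervals.

    Each interval contributes `weight` to every position left..right inclusive,
    so the intervals are rows with the consecutive ones property.
    """
    open_intervals = []  # stack of [weight, left] for currently open intervals
    segments = []
    prev = 0
    for i, a in enumerate(row + [0]):        # trailing 0 closes everything
        if a > prev:                         # profile rises: open a new interval
            open_intervals.append([a - prev, i])
        while a < prev:                      # profile falls: close intervals
            w, left = open_intervals.pop()
            drop = min(w, prev - a)
            segments.append((drop, left, i - 1))
            if drop < w:                     # partially closed: keep remainder open
                open_intervals.append([w - drop, left])
            prev -= drop
        prev = a
    return segments

row = [2, 3, 1, 1]
segs = decompose_row(row)
print(segs)
# Verify that the weighted intervals reproduce the row exactly:
rebuilt = [sum(w for w, l, r in segs if l <= j <= r) for j in range(len(row))]
print(rebuilt)  # -> [2, 3, 1, 1]
```

For this row the total weight is 3, matching the known optimum sum over i of max(0, a_i - a_{i-1}); the constrained variants in the thesis restrict which intervals may appear together in one aperture.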
10

Comparison between Simultaneous and Traditional Consecutive Malolactic Fermentations in Wine

Pan, Wei 07 December 2012 (has links)
Successfully inducing malolactic fermentation in the production of grape wines can be challenging, especially in wines that have finished alcoholic fermentation and have limited energy sources, low pH values and high ethanol concentrations. In this thesis, the kinetics of several chemicals of enological relevance were studied in a white wine (Chardonnay) and a red wine (Cabernet Franc) vinified by traditional, consecutive alcoholic (AF) and malolactic fermentations (MLF), and by simultaneous AF/MLF, where bacteria were co-inoculated with yeast. The Chardonnay must was adjusted to four pH values (3.20, 3.35, 3.50 or 3.65), the Cabernet Franc was kept at its original pH value (3.56), and the concentrations of sugars, organic acids and acetaldehyde were followed throughout the fermentations. For the Chardonnay, the degradation of glucose and fructose was slower at the lowest must pH value (3.20) and independent of the time of bacterial inoculation. In all cases, malolactic conversion was faster after yeast-bacterial co-inoculation and was completed in the simultaneous treatments at pH values of 3.35-3.65 and in the consecutive treatments at pH 3.50 and 3.65. No statistically significant difference in final acetic acid concentration was observed across inoculation and pH treatments. For the Cabernet Franc, the results confirmed that co-inoculation shortened the fermentation periods while having minor effects on other parameters. Overall, simultaneous AF/MLF greatly reduced fermentation times, while the must pH remained a strong factor for fermentation success and determined the final concentration of various wine components. The time of inoculation significantly influenced the formation and degradation kinetics of organic acids and acetaldehyde.
