1 |
Selected papers on colorimetric theory and colour modeling. Oulton, David. January 2010 (has links)
The annotated papers submitted as part of this thesis consider the phenomenon of colour at the fundamental, technical, and application levels; they were written and published by Oulton between 1990 and 2009. The papers disclose significant insights by the author into colorimetric modeling theory and report aspects of the author's work that have led to commercially successful practical applications. The academic significance of these papers is evident in their citation record; their practical value is shown by a number of successful industrial collaboration programmes, and through the award of national prizes for innovation by the Worshipful Company of Dyers and the Society of Dyers and Colourists.

The published research primarily concerns digital devices that either capture or reproduce coloured images. For example, the research problem of how to calibrate the colour on computer CRT screens, which was thought at the time to be intractable, was reported by Oulton in paper 1 to be solved at the two- to three-significant-figure level of colorimetric accuracy. This world-leading level of accuracy was subsequently confirmed using a comprehensive data set in paper 7, and has been exploited internationally in commercial computer-aided design and colour communication systems by Textile Computer Systems Ltd and Datacolor Inc. Further research problems resolved by Oulton in the presented papers include how to predict the colorimetric sensitivity of dye recipes; how to design, test, and fine-tune the spectral response of digital cameras; and how individual customers in a shop can be tracked automatically to reveal their buying behaviour, using coloured CCTV images.

The challenge to the standard CIE colorimetric model posed by the results of Dr W. A. Thornton was analysed and satisfactorily explained by Oulton in papers 2, 3 and 4. It is shown that Thornton's results do not in any way compromise either the practice of colorimetry based on the CIE Standard Observer or the validity of its quantifying data sets. It is additionally shown, under the annotation of paper 4 presented here, that the success of the CIE colorimetric model has a clearly demonstrable theoretical basis.

In all but one of the presented papers the convention is maintained that the standard CIE XYZ co-ordinate model should be used as the reference basis when modeling the properties of colour and quantifying its uses. The final paper to be published (presented here as paper 4) challenges this convention and demonstrates that a context-free and formally defined alternative reference basis may be used in colorimetric modeling with significant advantage. It is also shown in paper 4 that, under the specified axioms, any cross-dependency that is potentially non-linear can in principle be resolved into its component scalar and additive relationships, and that the causes of scalar non-linearity may be characterized independently of the causes of linearly additive cross-dependency. The result is a widely applicable analytical and experimental design method for resolving complex cross-dependent relationships in general, and in particular those between spectral visual stimuli and the psychophysical response to them.
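For readers unfamiliar with the reference basis mentioned above, the CIE XYZ tristimulus values of a self-luminous stimulus with spectral power distribution P(λ) are obtained, in the standard textbook formulation (a background definition, not a result of these papers), by weighting P against the CIE 1931 Standard Observer colour-matching functions:

```latex
% CIE 1931 tristimulus values for a stimulus with spectral power
% distribution P(lambda); \bar{x}, \bar{y}, \bar{z} are the Standard
% Observer colour-matching functions and k is a normalising constant.
X = k \int_{380}^{780} P(\lambda)\,\bar{x}(\lambda)\,d\lambda, \qquad
Y = k \int_{380}^{780} P(\lambda)\,\bar{y}(\lambda)\,d\lambda, \qquad
Z = k \int_{380}^{780} P(\lambda)\,\bar{z}(\lambda)\,d\lambda
```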
|
2 |
A Multi-Wilkinson Power Divider Based Complex Reflection Coefficient Detector. Cooper, James Roger. 19 May 2010 (has links)
In the field of applied electromagnetics there is a continuing need for new methods of electrical characterization of materials, systems, and devices. Many applications require small and/or inexpensive equipment to perform these characterizations. The current method for measuring electrical properties at frequencies above 300 MHz, the transmission/reflection method, has severe limitations in these respects due to the large size and high price of the necessary equipment. Therefore, presented herein is the conceptualization, design, and analysis of a complex reflection coefficient detector that is relatively small, lightweight, and inexpensive.
A reflection coefficient detector is a device designed to isolate the driving signal and the reflected signal and to compare them. The reflected signal arises from a mismatch between the device's output impedance and the load's input impedance. By comparing the driving, or transmitted, signal with the reflected signal, the reflection coefficient at the boundary can be calculated. This coefficient can in turn be used to calculate the load's input impedance or, when combined with the characteristics of an attached probe, a material's permittivity.
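As a concrete illustration of the relationship just described (standard transmission-line theory rather than code from the thesis, and assuming a 50-ohm reference impedance), the measured complex reflection coefficient and the load impedance determine one another as follows:

```python
# Illustrative sketch (standard transmission-line relations, not code from the
# thesis): converting between a complex reflection coefficient and a load
# impedance, given the reference (characteristic) impedance Z0.

def reflection_coefficient(z_load: complex, z0: complex = 50.0) -> complex:
    """Gamma = (Z_L - Z0) / (Z_L + Z0)."""
    return (z_load - z0) / (z_load + z0)

def load_impedance(gamma: complex, z0: complex = 50.0) -> complex:
    """Invert the relation: Z_L = Z0 * (1 + Gamma) / (1 - Gamma)."""
    return z0 * (1 + gamma) / (1 - gamma)

if __name__ == "__main__":
    z_load = 75 + 25j                       # hypothetical load impedance in ohms
    gamma = reflection_coefficient(z_load)  # what the detector would measure
    print(gamma, load_impedance(gamma))     # recovers approximately 75+25j
```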
The reflection coefficient detector presented here is built using microstrip and surface-mount components, which makes the device comparatively cheap. Its design is based upon five Wilkinson power dividers, which lends itself to being scaled down for implementation in on-chip and other micro- and nano-scale systems.
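For context, the building block named here has a simple idealised description: an ideal equal-split Wilkinson power divider at its design frequency is matched at all three ports, splits the input power equally between its two outputs, and isolates the two output ports from each other. A standard textbook statement of its S-matrix (not taken from this thesis) is:

```latex
% Ideal equal-split Wilkinson power divider at the design frequency
% (standard textbook result): matched at all ports, -3 dB split, and
% isolation between the two output ports.
S = \frac{-j}{\sqrt{2}}
\begin{pmatrix}
0 & 1 & 1 \\
1 & 0 & 0 \\
1 & 0 & 0
\end{pmatrix}
```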
The accuracy and functionality of the device will be demonstrated through S-parameter measurements and CAD simulations. Through these, it will be shown that the device is a practical means of making measurements in applications that would otherwise be constrained by the limitations described above. In closing, applications, alternative designs, and future advancements of the complex reflection coefficient detector will be discussed.
|
3 |
Algorithmique et complexité des systèmes à compteurs / Algorithmics and complexity of counter machines. Blondin, Michael. 29 June 2016 (has links)
L'un des aspects fondamentaux des systèmes informatiques modernes, et en particulier des systèmes critiques, est la possibilité d'exécuter plusieurs processus, partageant des ressources communes, de façon simultanée. De par leur nature concurrentielle, le bon fonctionnement de ces systèmes n'est assuré que lorsque leurs comportements ne dépendent pas d'un ordre d'exécution prédéterminé. En raison de cette caractéristique, il est particulièrement difficile de s'assurer qu'un système concurrent ne possède pas de faille.

Dans cette thèse, nous étudions la vérification formelle, une approche algorithmique qui vise à automatiser la vérification du bon fonctionnement de systèmes concurrents en procédant par une abstraction vers des modèles mathématiques. Nous considérons deux de ces modèles, les réseaux de Petri et les systèmes d'addition de vecteurs, et les problèmes de vérification qui leur sont associés.

Nous montrons que le problème d'accessibilité pour les systèmes d'addition de vecteurs (avec états) à deux compteurs est PSPACE-complet, c'est-à-dire complet pour la classe des problèmes solubles à l'aide d'une quantité polynomiale de mémoire. Nous établissons ainsi la complexité calculatoire précise de ce problème, répondant à une question demeurée ouverte depuis plus de trente ans.

Nous proposons une nouvelle approche au problème de couverture pour les réseaux de Petri, basée sur un algorithme arrière guidé par une caractérisation logique de l'accessibilité dans les réseaux de Petri dits continus. Cette approche nous a permis de mettre au point un nouvel algorithme qui s'avère particulièrement efficace en pratique, tel que démontré par notre implémentation logicielle nommée QCover.

Nous complétons ces résultats par une étude des systèmes de transitions bien structurés qui constituent une abstraction générale des systèmes d'addition de vecteurs et des réseaux de Petri. Nous considérons le cas des systèmes de transitions bien structurés à branchement infini, une classe qui inclut les réseaux de Petri possédant des arcs pouvant consommer ou produire un nombre arbitraire de jetons. Nous développons des outils mathématiques facilitant l'étude de ces systèmes et nous délimitons les frontières au-delà desquelles la décidabilité des problèmes de terminaison, de finitude, de maintenabilité et de couverture est perdue. / One fundamental aspect of computer systems, and in particular of critical systems, is the ability to run simultaneously many processes sharing resources. Such concurrent systems only work correctly when their behaviours are independent of any execution ordering. For this reason, it is particularly difficult to ensure the correctness of concurrent systems.

In this thesis, we study formal verification, an algorithmic approach to the verification of concurrent systems based on mathematical modeling. We consider two of the most prominent models, Petri nets and vector addition systems, and their usual verification problems considered in the literature.

We show that the reachability problem for vector addition systems (with states) restricted to two counters is PSPACE-complete, that is, it is complete for the class of problems solvable with a polynomial amount of memory. Hence, we establish the precise computational complexity of this problem, left open for more than thirty years.

We develop a new approach to the coverability problem for Petri nets which is primarily based on applying forward coverability in continuous Petri nets as a pruning criterion inside a backward coverability framework. We demonstrate the effectiveness of our approach by implementing it in a tool named QCover.

We complement these results with a study of well-structured transition systems which form a general abstraction of vector addition systems and Petri nets. We consider infinitely branching well-structured transition systems, a class that includes Petri nets with special transitions that may consume or produce arbitrarily many tokens. We develop mathematical tools in order to study these systems and we delineate the decidability frontier for the termination, boundedness, maintainability and coverability problems for these systems.
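To make the backward approach described above concrete, the sketch below gives a minimal rendering of the classical backward coverability algorithm for Petri nets, with a `prune` hook standing in for the kind of forward (continuous) reachability test used as a pruning criterion; it is an illustrative simplification, not the QCover implementation.

```python
# Illustrative sketch of backward coverability for Petri nets (a simplified
# rendering, not the QCover code). Markings are tuples of token counts and
# each transition is a (pre, post) pair of markings.

def leq(a, b):
    """Componentwise order on markings."""
    return all(x <= y for x, y in zip(a, b))

def min_predecessor(target, pre, post):
    """Smallest marking from which firing (pre, post) reaches a marking >= target."""
    return tuple(max(p, t - (q - p)) for t, p, q in zip(target, pre, post))

def minimize(markings):
    """Keep only the componentwise-minimal markings (basis of an upward-closed set)."""
    ms = list(set(markings))
    return [m for m in ms if not any(other != m and leq(other, m) for other in ms)]

def backward_coverability(init, target, transitions, prune=lambda m: True):
    """Return True iff `target` is coverable from `init`. The optional `prune(m)`
    callback may return False to discard a basis element that some sound forward
    analysis (e.g. continuous reachability) has shown to be unreachable."""
    basis = [target]
    while True:
        if any(leq(b, init) for b in basis):
            return True                          # init already covers a basis element
        preds = [min_predecessor(b, pre, post)
                 for b in basis for (pre, post) in transitions]
        preds = [m for m in preds if prune(m)]   # pruning hook
        updated = minimize(basis + preds)
        if set(updated) == set(basis):
            return False                         # fixpoint reached without covering init
        basis = updated

if __name__ == "__main__":
    # Toy Petri net with two places (hypothetical example): t1 moves a token
    # from place 0 to place 1, t2 creates a token in place 0.
    transitions = [((1, 0), (0, 1)), ((0, 0), (1, 0))]
    print(backward_coverability(init=(0, 0), target=(0, 2), transitions=transitions))  # True
```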
|
4 |
Algorithmique et complexité des systèmes à compteurs. Blondin, Michael. 04 1900 (has links)
Réalisé en cotutelle avec l'École normale supérieure de Cachan – Université Paris-Saclay / L'un des aspects fondamentaux des systèmes informatiques modernes, et en particulier des systèmes critiques, est la possibilité d'exécuter plusieurs processus, partageant des ressources communes, de façon simultanée. De par leur nature concurrentielle, le bon fonctionnement de ces systèmes n'est assuré que lorsque leurs comportements ne dépendent pas d'un ordre d'exécution prédéterminé. En raison de cette caractéristique, il est particulièrement difficile de s'assurer qu'un système concurrent ne possède pas de faille.
Dans cette thèse, nous étudions la vérification formelle, une approche algorithmique qui vise à automatiser la vérification du bon fonctionnement de systèmes concurrents en procédant par une abstraction vers des modèles mathématiques. Nous considérons deux de ces modèles, les réseaux de Petri et les systèmes d'addition de vecteurs, et les problèmes de vérification qui leur sont associés.
Nous montrons que le problème d'accessibilité pour les systèmes d'addition de vecteurs (avec états) à deux compteurs est PSPACE-complet, c'est-à-dire complet pour la classe des problèmes solubles à l'aide d'une quantité polynomiale de mémoire. Nous établissons ainsi la complexité calculatoire précise de ce problème, répondant à une question demeurée ouverte depuis plus de trente ans.
Nous proposons une nouvelle approche au problème de couverture pour les réseaux de Petri, basée sur un algorithme arrière guidé par une caractérisation logique de l'accessibilité dans les réseaux de Petri continus. Cette approche nous a permis de mettre au point un nouvel algorithme qui s'avère particulièrement efficace en pratique, tel que démontré par notre implémentation logicielle nommée QCover.
Nous complétons ces résultats par une étude des systèmes de transitions bien structurés qui constituent une abstraction générale des systèmes d'addition de vecteurs et des réseaux de Petri. Nous considérons le cas des systèmes de transitions bien structurés à branchement infini, une classe qui inclut les réseaux de Petri possédant des arcs pouvant consommer ou produire un nombre arbitraire de jetons. Nous développons des outils mathématiques facilitant l'étude de ces systèmes et nous délimitons les frontières au-delà desquelles la décidabilité des problèmes de terminaison, de finitude, de maintenabilité et de couverture est perdue. / One fundamental aspect of computer systems, and in particular of critical systems, is the ability to run simultaneously many processes sharing resources. Such concurrent systems only work correctly when their behaviours are independent of any execution ordering. For this reason, it is particularly difficult to ensure the correctness of concurrent systems.
In this thesis, we study formal verification, an algorithmic approach to the verification of concurrent systems based on mathematical modeling. We consider two of the most prominent models, Petri nets and vector addition systems, and their usual verification problems considered in the literature.
We show that the reachability problem for vector addition systems (with states) restricted to two counters is PSPACE-complete, that is, it is complete for the class of problems solvable with a polynomial amount of memory. Hence, we establish the precise computational complexity of this problem, left open for more than thirty years.
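As an illustration of the objects this result concerns, a two-counter vector addition system with states can be written down directly; the toy instance below is hypothetical and only meant to show what configurations, rules, and runs look like.

```python
# Toy two-counter VASS (vector addition system with states), given as rules
# (state, (d1, d2), next_state); a configuration is (state, (c1, c2)) and the
# counters must stay non-negative along a run. Hypothetical example only.

RULES = [
    ("p", (1, 0), "p"),    # in state p: increment the first counter
    ("p", (0, 0), "q"),    # move from p to q without touching the counters
    ("q", (-1, 2), "q"),   # in state q: trade one unit of counter 1 for two of counter 2
]

def step(config, rule):
    """Apply one rule to a configuration, or return None if it is not enabled."""
    (state, (c1, c2)) = config
    (src, (d1, d2), dst) = rule
    if state != src or c1 + d1 < 0 or c2 + d2 < 0:
        return None
    return (dst, (c1 + d1, c2 + d2))

def run(config, rules_fired):
    """Follow a sequence of rules; raise if some rule is not enabled."""
    for r in rules_fired:
        config = step(config, r)
        if config is None:
            raise ValueError("rule not enabled")
    return config

if __name__ == "__main__":
    start = ("p", (0, 0))
    # Reachability asks whether a target such as ("q", (0, 4)) is reachable:
    # two increments in p, a switch to q, then two trades reach it.
    print(run(start, [RULES[0], RULES[0], RULES[1], RULES[2], RULES[2]]))
```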
We develop a new approach to the coverability problem for Petri nets which is primarily based on applying forward coverability in continuous Petri nets as a pruning criterion inside a backward coverability framework. We demonstrate the effectiveness of our approach by implementing it in a tool named QCover.
We complement these results with a study of well-structured transition systems which form a general abstraction of vector addition systems and Petri nets. We consider infinitely branching well-structured transition systems, a class that includes Petri nets with special transitions that may consume or produce arbitrarily many tokens. We develop mathematical tools in order to study these systems and we delineate the decidability frontier for the termination, boundedness, maintainability and coverability problems.
|
5 |
Games and Probabilistic Infinite-State Systems. Sandberg, Sven. January 2007 (has links)
Computer programs keep finding their way into new safety-critical applications, while at the same time growing more complex. This calls for new and better methods to verify the correctness of software. We focus on one approach to verifying systems, namely that of model checking. First, we investigate two categories of problems related to model checking: games and stochastic infinite-state systems. We then join these two lines of research by studying stochastic infinite-state games.

Game theory has been used in verification for a long time. We focus on finite-state 2-player parity and limit-average (mean payoff) games. These problems have applications in model checking for the μ-calculus, one of the most expressive logics for programs. We give a simplified proof of memoryless determinacy. The proof applies both to parity and limit-average games. Moreover, we suggest a strategy improvement algorithm for limit-average games. The algorithm is discrete and strongly subexponential.

We also consider probabilistic infinite-state systems (Markov chains) induced by three types of models. Lossy channel systems (LCS) have been used to model processes that communicate over an unreliable medium. Petri nets model systems with unboundedly many parallel processes. Noisy Turing machines can model computers where the memory may be corrupted in a stochastic manner. We introduce the notion of eagerness and prove that all these systems are eager. We give a scheme to approximate the value of a reward function defined on paths. Eagerness allows us to prove that the scheme terminates. For probabilistic LCS, we also give an algorithm that approximates the limit-average reward. This quantity describes the long-run behavior of the system.

Finally, we investigate Büchi games on probabilistic LCS. Such games can be used to model a malicious cracker trying to break a network protocol. We give an algorithm to solve these games.
|
6 |
Games and Probabilistic Infinite-State Systems. Sandberg, Sven. January 2007 (has links)
Computer programs keep finding their way into new safety-critical applications, while at the same time growing more complex. This calls for new and better methods to verify the correctness of software. We focus on one approach to verifying systems, namely that of model checking. First, we investigate two categories of problems related to model checking: games and stochastic infinite-state systems. We then join these two lines of research by studying stochastic infinite-state games.

Game theory has been used in verification for a long time. We focus on finite-state 2-player parity and limit-average (mean payoff) games. These problems have applications in model checking for the μ-calculus, one of the most expressive logics for programs. We give a simplified proof of memoryless determinacy. The proof applies both to parity and limit-average games. Moreover, we suggest a strategy improvement algorithm for limit-average games. The algorithm is discrete and strongly subexponential.

We also consider probabilistic infinite-state systems (Markov chains) induced by three types of models. Lossy channel systems (LCS) have been used to model processes that communicate over an unreliable medium. Petri nets model systems with unboundedly many parallel processes. Noisy Turing machines can model computers where the memory may be corrupted in a stochastic manner. We introduce the notion of eagerness and prove that all these systems are eager. We give a scheme to approximate the value of a reward function defined on paths. Eagerness allows us to prove that the scheme terminates. For probabilistic LCS, we also give an algorithm that approximates the limit-average reward. This quantity describes the long-run behavior of the system.

Finally, we investigate Büchi games on probabilistic LCS. Such games can be used to model a malicious cracker trying to break a network protocol. We give an algorithm to solve these games.
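To illustrate the limit-average (mean payoff) objective referred to above, the following sketch uses the standard definition rather than anything specific to this thesis: the value of a play is the long-run average of its rewards, which for an ultimately periodic play reduces to the average over the repeated cycle.

```python
# Illustrative sketch of the limit-average (mean payoff) value of a play
# (standard definition, not code from the thesis). A play is modelled as a
# finite prefix followed by a cycle repeated forever; its mean payoff is the
# limit of the average reward over longer and longer prefixes.

def mean_payoff(prefix_rewards, cycle_rewards):
    """Limit-average reward of the play prefix . cycle^omega."""
    # The finite prefix contributes nothing in the limit; only the cycle matters.
    return sum(cycle_rewards) / len(cycle_rewards)

def average_of_first_n(prefix_rewards, cycle_rewards, n):
    """Average reward over the first n steps, for comparison with the limit."""
    rewards = list(prefix_rewards)
    while len(rewards) < n:
        rewards.extend(cycle_rewards)
    return sum(rewards[:n]) / n

if __name__ == "__main__":
    prefix, cycle = [10, -4], [1, 3, -1]              # hypothetical rewards
    print(mean_payoff(prefix, cycle))                 # 1.0
    print(average_of_first_n(prefix, cycle, 10_000))  # close to 1.0
```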
|