Spelling suggestions: "subject:"large"" "subject:"marge""
231 |
Large display interaction via multiple acceleration curves on a touchpadEsakia, Andrey 23 January 2014 (has links)
Large, high resolution displays combine high pixel density with ample physical dimensions. Combination of these two factors creates a multi-scale workspace where object targeting requires both high speed and high accuracy for nearby and far apart targeting. Modern operating systems support dynamic control-display gain adjustment (i.e. cursor acceleration) that helps to maintain both speed and accuracy. However, very large high resolution displays require broad range of control-display gain ratios. Current interaction techniques attempt to solve the problem by utilizing multiple modes of interaction, where different modes provide different levels of pointer precision. We are investigating the question of the value of allowing users to dynamically choose granularity levels for continuous pointing within single mode of interaction via multiple acceleration curves. Our solution offers different cursor acceleration curves depending on the targeting conditions, thus broadening the range of control-display ratios. Our approach utilizes a consumer multitouch touchpad that allows fast and accurate detection of multiple fingers. A user can choose three different acceleration curves based on how many fingers are used for cursor positioning. Our goal is to investigate the effects of such multi-scale interaction and to compare it against standard single curve interaction. / Master of Science
|
232 |
Portraiture and the Large Lecture: Storying One Chemistry Professor's Practical KnowledgeEddleton, Jeannine E. 04 December 2012 (has links)
Practical knowledge, as defined by Freema Elbaz (1983), is a complex, practically oriented set of understandings which teachers use to actively shape and direct their work. The goal of this study is the construction of a social science portrait that illuminates the practical knowledge of a large lecture professor of general chemistry at a public research university in the southeast. This study continues Elbaz's (1981) work on practical knowledge with the incorporation of a qualitative and intentionally interventionist methodology which "blurs the boundaries of aesthetics and empiricism in an effort to capture the complexity, dynamics, and subtlety of human experience and organizational life," (Lawrence-Lightfoot and Davis, 1997).
This collection of interviews, observations, writings, and reflections is designed for an eclectic audience with the intent of initiating conversation on the topic of the large lecture and is a purposeful attempt to link research and practice.
Social science portraiture is uniquely suited to this intersection of researcher and researched, the perfect combination of methodology and analysis for a project that is both product and praxis.
The following research questions guide the study.
Are aspects of Elbaz's practical knowledge identifiable in the research conversations conducted with a large lecture college professor?
Is practical knowledge identifiable during observations of Patricia's large lecture chemistry classroom practice?
Freema Elbaz conducted research conversations with Sarah, a high school classroom and writing resource teacher who conducted much of her teaching work one on one with students. Patricia's practice differs significantly from Sarah's with respect to subject matter and to scale. / Ph. D.
|
233 |
InjectBench: An Indirect Prompt Injection Benchmarking FrameworkKong, Nicholas Ka-Shing 20 August 2024 (has links)
The integration of large language models (LLMs) with third party applications has allowed for LLMs to retrieve information from up-to-date or specialized resources. Although this integration offers numerous advantages, it also introduces the risk of indirect prompt injection attacks. In such scenarios, an attacker embeds malicious instructions within the retrieved third party data, which when processed by the LLM, can generate harmful and untruthful outputs for an unsuspecting user. Although previous works have explored how these attacks manifest, there is no benchmarking framework to evaluate indirect prompt injection attacks and defenses at scale, limiting progress in this area. To address this gap, we introduce InjectBench, a framework that empowers the community to create and evaluate custom indirect prompt injection attack samples. Our study demonstrate that InjectBench has the capabilities to produce high quality attack samples that align with specific attack goals, and that our LLM evaluation method aligns with human judgement. Using InjectBench, we investigate the effects of different components of an attack sample on four LLM backends, and subsequently use this newly created dataset to do preliminary testing on defenses against indirect prompt injections. Experiment results suggest that while more capable models are susceptible to attacks, they are better equipped at utilizing defense strategies. To summarize, our work helps the research community to systematically evaluate features of attack samples and defenses by introducing a dataset creation and evaluation framework. / Master of Science / Large language models (LLMs), such as ChatGPT, are now able to retrieve up-to-date information from online resources like Google Flights or Wikipedia. This ultimately allows the LLM to utilize current information to generate truthful, helpful and accurate responses. Despite the numerous advantages, it also exposes a user to a new vector of attacks known as indirect prompt injections. In this attack, an attacker will write a instruction onto an online resource that an LLM will process when retrieved from the online resource. The primary aim of the attacker is to instruct the LLM to say something it is not supposed to, and thus may manifest as a blatant lie or misinformation given to the user. Prior works have studied and showcased the harmfulness of this attack, however not many works have tried to understand which LLMs are more vulnerable to indirect prompt injection attacks and how we may defend from them. We believe that this is mainly due to the non-availability of a benchmarking dataset which allows us to test LLMs and new defenses. To address this gap, we introduce InjectBench, a methodology that allows the automated creation of these benchmarking datasets, and the evaluation of LLMs and defenses. We show that InjectBench can produce a high quality dataset that we can customize to specific attack goals, and that our evaluation process is accurate and agrees with human judgement. Using the benchmarking dataset created from InjectBench, we evaluate four LLMs and investigate defenses for indirect prompt injection attacks.
|
234 |
A cauchy-stress based solution for a necking elastic constitutive model under large deformationOlley, Peter January 2006 (has links)
No / A finite element based method for solution of large-deformation hyperelastic constitutive models is developed, which solves the Cauchy-stress balance equation using a single rotation of stress from principal directions to a fixed co-ordinate system. Features of the method include stress computation by central differencing of the hyperelastic energy function, mixed integration-order incompressibility enforcement, and an iterative solution method that employs notional `small strain¿ stiffness. The method is applied to an interesting and difficult elastic model that replicates polymer `necking¿; the method is shown to give good agreement with published results from a well-established finite element package, and with published experimental results. It is shown that details of the manner in which incompressibility is enforced affects whether key experimental phenomena are clearly resolved.
|
235 |
Managing Personnel for Milking Parlors on Large HerdsVanBaale, Matthew, Smith, John 04 1900 (has links)
4 pp. / As today's dairy consolidates, cows are being milked more rapidly through larger milking parlors on larger dairies than ever before. Because milk is the primary commodity and source of income for producers, the harvesting of milk is the single most important job on any dairy. Producing high-quality milk to maximize yields and economic value requires effective parlor management, an enormous challenge for producers. Managing large parlors includes managing labor, milking equipment, as well as monitoring and evaluating parlor performance. The goal of parlor management for large herds is to enhance profits by maximizing milk yield, udder health, and overall efficiency. This may be accomplished by adequately training and motivating employees to efficiently milking clean, dry, stimulated teats using proper milking hygiene.
|
236 |
QUALIFICATION RESEARCH FOR RELIABLE, CUSTOM LSI/VLSI ELECTRONICS.Matsumori, Barry Alan. January 1985 (has links)
No description available.
|
237 |
Exploitation du contenu pour l'optimisation du stockage distribué / Leveraging content properties to optimize distributed storage systemsKloudas, Konstantinos 06 March 2013 (has links)
Les fournisseurs de services de cloud computing, les réseaux sociaux et les entreprises de gestion des données ont assisté à une augmentation considérable du volume de données qu'ils reçoivent chaque jour. Toutes ces données créent des nouvelles opportunités pour étendre la connaissance humaine dans des domaines comme la santé, l'urbanisme et le comportement humain et permettent d'améliorer les services offerts comme la recherche, la recommandation, et bien d'autres. Ce n'est pas par accident que plusieurs universitaires mais aussi les médias publics se référent à notre époque comme l'époque “Big Data”. Mais ces énormes opportunités ne peuvent être exploitées que grâce à de meilleurs systèmes de gestion de données. D'une part, ces derniers doivent accueillir en toute sécurité ce volume énorme de données et, d'autre part, être capable de les restituer rapidement afin que les applications puissent bénéficier de leur traite- ment. Ce document se concentre sur ces deux défis relatifs aux “Big Data”. Dans notre étude, nous nous concentrons sur le stockage de sauvegarde (i) comme un moyen de protéger les données contre un certain nombre de facteurs qui peuvent les rendre indisponibles et (ii) sur le placement des données sur des systèmes de stockage répartis géographiquement, afin que les temps de latence perçue par l'utilisateur soient minimisés tout en utilisant les ressources de stockage et du réseau efficacement. Tout au long de notre étude, les données sont placées au centre de nos choix de conception dont nous essayons de tirer parti des propriétés de contenu à la fois pour le placement et le stockage efficace. / Cloud service providers, social networks and data-management companies are witnessing a tremendous increase in the amount of data they receive every day. All this data creates new opportunities to expand human knowledge in fields like healthcare and human behavior and improve offered services like search, recommendation, and many others. It is not by accident that many academics but also public media refer to our era as the “Big Data” era. But these huge opportunities come with the requirement for better data management systems that, on one hand, can safely accommodate this huge and constantly increasing volume of data and, on the other, serve them in a timely and useful manner so that applications can benefit from processing them. This document focuses on the above two challenges that come with “Big Data”. In more detail, we study (i) backup storage systems as a means to safeguard data against a number of factors that may render them unavailable and (ii) data placement strategies on geographically distributed storage systems, with the goal to reduce the user perceived latencies and the network and storage resources are efficiently utilized. Throughout our study, data are placed in the centre of our design choices as we try to leverage content properties for both placement and efficient storage.
|
238 |
Observations de pulsars avec le Fermi gamma-ray space telescopeParent, Damien 13 November 2009 (has links)
Le Large Area Telescope à bord du satellite Fermi, lancé le 11 juin 2008, est un télescope spatial observant l'univers des hautes énergies. L'instrument couvre l'intervalle en énergie de 20MeV à 300GeV avec une sensibilité nettement améliorée et la capacité de localiser des sources ponctuelles. Il détecte les photons ? par leur conversion en paire électron- positron, et mesure leur direction et leur énergie grâce à un trajectographe et un calorimètre. Cette thèse présente les courbes de lumières et les mesures spectrales résolues en phase des pulsars radio et gamma détectés par le LAT. La mesure des paramètres spectraux (flux, indice spectral, et énergie de coupure) dépend des fonctions de réponse de l'instrument (IRFs). Une méthode développée pour la validation en orbite de la surface efficace est présentée en utilisant le pulsar de Vela. Les efficacités des coupures entre les données du LAT et les données simulées sont comparées à chaque niveau de la rejection du fond. Les résultats de cette analyse sont propagés vers les IRFs pour évaluer les systématiques des mesures spectrales. La dernière partie de cette thèse présente les découvertes de nouveaux pulsars ? individuels tels que PSR J0205+6449, J2229+6114, et J1048-5832 à partir des données du LAT et des éphémérides radio et X. Des analyses temporelles et spectrales sont investies dans le but de contraindre les modèles d'émission gamma. Finalement, nous discutons les propriétés d'une large population de pulsars gamma détectés par le LAT, incluant les pulsars normaux et les pulsars milliseconde. / The Large Area Telescope (LAT) on Fermi, launched on 2008 June 11, is a space telescope to explore the high energy ?-ray universe. The instrument covers the energy range from 20MeV to 300GeV with greatly improved sensitivity and ability to localize ?-ray point sources. It detects ?-rays through conversion to electron-positron pairs and measurement of their direction in a tracker and their energy in a calorimeter. This thesis presents the ?-ray light curves and the phase-resolved spectral measurements of radio-loud gamma-ray pulsars detected by the LAT. The measurement of pulsar spectral parameters (i.e. integrated flux, spectral index, and energy cut-off) depends on the instrument response functions (IRFs). A method developed for the on-orbit validation of the effective area is presented using the Vela pulsar. The cut efficiencies between the real data and the simulated data are compared at each stage of the background rejection. The results are then propagated to the IRFs, allowing the systematic uncertainties of the spectral parameters to be estimated. The last part of this thesis presents the discoveries, using both the LAT observations and the radio and X ephemeredes, of new individual ?-ray pulsars such as PSR J0205+6449, and the Vela-like pulsars J2229+6114 and J1048-5832. Timing and spectral analysis are investigated in order to constrain the ?-ray emission model. In addition, we discuss the properties of a large population of ?-ray pulsars detected by the LAT, including normal pulsars, and millisecond pulsars.
|
239 |
Performance analysis of large-scale resource-bound computer systemsPourranjbar, Alireza January 2015 (has links)
We present an analysis framework for performance evaluation of large-scale resource-bound (LSRB) computer systems. LSRB systems are those whose resources are continually in demand to serve resource users, who appear in large populations and cause high contention. In these systems, the delivery of quality service is crucial, even in the event of resource failure. Therefore, various techniques have been developed for evaluating their performance. In this thesis, we focus on the technique of quantitative modelling, where in order to study a system, first its model is constructed and then the system’s behaviour is analysed via the model. A number of high level formalisms have been developed to aid the task of model construction. We focus on PEPA, a stochastic process algebra that supports compositionality and enables us to easily build complex LSRB models. In spite of this advantage, however, the task of analysing LSRB models still poses unresolved challenges. LSRB models give rise to very large state spaces. This issue, known as the state space explosion problem, renders the techniques based on discrete state representation, such as numerical Markovian analysis, computationally expensive. Moreover, simulation techniques, such as Gillespie’s stochastic simulation algorithm, are also computationally demanding, as numerous trajectories need to be collected. Furthermore, as we show in our first contribution, the techniques based on the mean-field theory or fluid flow approximation are not readily applicable to this case. In LSRB models, resources are not assumed to be present in large populations and models exhibit highly noisy and stochastic behaviour. Thus, the mean-field deterministic behaviour might not be faithful in capturing the system’s randomness and is potentially too crude to show important aspects of their behaviours. In this case, the modeller is unable to obtain important performance indicators, such as the reliability measures of the system. Considering these limitations, we contribute the following analytical methods particularly tailored to LSRB models. First, we present an aggregation method. The aggregated model captures the evolution of only the system’s resources and allows us to efficiently derive a probability distribution over the configurations they experience. This distribution provides full faithfulness for studying the stochastic behaviour of resources. The aggregation can be applied to all LSRB models that satisfy a syntactic aggregation condition, which can be quickly checked syntactically. We present an algorithm to generate the aggregated model from the original model when this condition is satisfied. Second, we present a procedure to efficiently detect time-scale near-complete decomposability (TSND). The method of TSND allows us to analyse LSRB models at a reduced cost, by dividing their state spaces into loosely coupled blocks. However, one important input is a partition of the transitions defined in the model, categorising them into slow or fast. Forming the necessary partition by the analysis of the model’s complete state space is costly. Our process derives this partition efficiently, by relying on a theorem stating that our aggregation preserves the original model’s partition and therefore, it can be derived by an efficient reachability analysis on the aggregated state space. We also propose a clustering algorithm to implement this reachability analysis. Third, we present the method of conditional moments (MCM) to be used on LSRB models. Using our aggregation, a probability distribution is formed over the configurations of a model’s resources. The MCM outputs the time evolution of the conditional moments of the marginal distribution over resource users given the configurations of resources. Essentially, for each such configuration, we derive measures such as conditional expectation, conditional variance, etc. related to the dynamics of users. This method has a high degree of faithfulness and allows us to capture the impact of the randomness of the behaviour of resources on the users. Finally, we present the advantage of the methods we proposed in the context of a case study, which concerns the performance evaluation of a two-tier wireless network constructed based on the femto-cell macro-cell architecture.
|
240 |
Contrôle de la propagation et de la recherche dans un solveur de contraintes / Controlling propagation and search within a constraint solverPrud'homme, Charles 28 February 2014 (has links)
La programmation par contraintes est souvent décrite, utopiquement, comme un paradigme déclaratif dans lequel l’utilisateur décrit son problème et le solveur le résout. Bien entendu, la réalité des solveurs de contraintes est plus complexe, et les besoins de personnalisation des techniques de modélisation et de résolution évoluent avec le degré d’expertise des utilisateurs. Cette thèse porte sur l’enrichissement de l’arsenal des techniques disponibles dans les solveurs de contraintes. D’une part, nous étudions la contribution d’un système d’explications à l’exploration de l’espace de recherche, dans le cadre spécifique d’une recherche locale. Deux heuristiques de voisinages génériques exploitant singulièrement les explications sont décrites. La première se base sur la difficulté de réparer une solution partiellement détruite, la seconde repose sur la nature non-optimale de la solution courante. Ces heuristiques mettent à jour la structure interne des problèmes traités pour construire des voisins de bonne qualité pour une recherche à voisinage large. Elles sont complémentaires d’autres heuristiques de voisinages génériques, avec lesquels elles peuvent être combinées efficacement. De plus, nous proposons de rendre le système d’explications paresseux afin d’en minimiser l’empreinte. D’autre part, nous effectuons un état des lieux des savoir-faire relatifs aux moteurs de propagation pour les solveurs de contraintes. Ces données sont exploitées opérationnellement à travers un langage dédié qui permet de personnaliser la propagation au sein d’un solveur, en fournissant des structures d’implémentation et en définissant des points de contrôle dans le solveur. Ce langage offre des concepts de haut niveau permettant à l’utilisateur d’ignorer les détails de mise en œuvre du solveur, tout en conservant un bon niveau de flexibilité et certaines garanties. Il permet l’expression de schémas de propagation spécifiques à la structure interne de chaque problème. La mise en œuvre et les expérimentations ont été effectués dans le solveur de contraintes Choco. Cette thèse a donné lieu à une nouvelle version de l’outil globalement plus efficace et nativement expliqué. / Constraint programming is often described, idealistically, as a declarative paradigm in which the user describes the problem and the solver solves it. Obviously, the reality of constraint solvers is more complex, and the needs in customization of modeling and solving techniques change with the level of expertise of users. This thesis focuses on enriching the arsenal of available techniques in constraint solvers. On the one hand, we study the contribution of an explanation system to the exploration of the search space in the specific context of a local search. Two generic neighborhood heuristics which exploit explanations singularly are described. The first one is based on the difficulty of repairing a partially destroyed solution, the second one is based on the non-optimal nature of the current solution. These heuristics discover the internal structure of the problems to build good neighbors for large neighborhood search. They are complementary to other generic neighborhood heuristics, with which they can be combined effectively. In addition, we propose to make the explanation system lazy in order to minimize its footprint. On the other hand, we undertake an inventory of know-how relative to propagation engines of constraint solvers. These data are used operationally through a domain specific language that allows users to customize the propagation schema, providing implementation structures and defining check points within the solver. This language offershigh-level concepts that allow the user to ignore the implementation details, while maintaining a good level of flexibility and some guarantees. It allows the expression of propagation schemas specific to the internal structure of each problem solved. Implementation and experiments were carried out in the Choco constraint solver, developed in this thesis. This has resulted in a new version of the overall effectiveness and natively explained tool.
|
Page generated in 0.041 seconds