1 |
Uneducated Injustice: A Social Cognitive Approach to Understanding Juror Misconduct and Verdict Errors. Calhoun, Melinee Melissa Marie, 01 January 2015 (has links)
A continuing problem in the adjudication of crime in the United States is the occurrence of erroneous convictions and acquittals. This problem affects the victims of crimes, who endure the emotional and mental distress of additional investigations and new trials. Defendants are affected because verdict errors can cost factually innocent people their freedom. These errors may occur because jurors are not knowledgeable about their roles, rights, and responsibilities: beyond the judge's minimal instructions, the jury is given no direction on the purpose and limitations of its role. Guided by social cognitive theory (SCT), this study examined incorrect verdicts by jurors in 2 Georgia counties to evaluate whether pretrial training has an impact on the incidence of verdict error. An experimental design was used to evaluate the impact of juror training on the occurrence of erroneous convictions and acquittals. The study included 156 participants who were registered voters from Lowndes and Lanier Counties, Georgia. The variables training, verdict errors, and juror misconduct were analyzed using t tests, Pearson correlation analysis, Levene's test of equality of variances, and chi-square analysis. The findings indicated a significant inverse relationship between the administration of pretrial training and the occurrence of verdict error. The results also suggest a relationship between the occurrence of juror misconduct and erroneous convictions, consistent with the impact of behavior on decision making posited by SCT. The implications for positive social change include recommendations that Lowndes and Lanier County court administrators consider routine pretrial training that includes information about the role of the juror in criminal trials.
|
2 |
Moderately cold indoor temperatures' effect on human attention: Immediate decrease in inhibiting erroneous responses. Jonsson, Anton; Hedman, Sandra, January 2018 (has links)
The aim of the study was to investigate whether a moderately cold indoor temperature, 15.5 ± 2 °C, has a negative effect on human attention. This was investigated in an experiment in which 40 participants (18 women; M = 23.5 years, age range 20–33 years) completed three commonly used attention-demanding cognitive tests. Half of the participants were tested at a normal room temperature of about 20 ± 1 °C and the remaining participants at a cooler temperature of 15.5 ± 2 °C. The three tests used were the Stroop Test, Trail Making Tests A and B, and the Dot Cancellation Test. The results suggest that attention is significantly affected in tests that demand rapid, correct responses, since the lower indoor temperature significantly impaired performance on the Stroop Test in particular. This effect is suggested to originate from a reduced ability to inhibit erroneous responses. It is also noteworthy that the time in the test environment was short, 15–20 minutes, so the effect appears to set in almost immediately.
|
3 |
La présomption de bonne foi / The presumption of good faith. Rifaï, Fadilé-Sylvie, 04 December 2010 (has links)
The presumption of good faith has legal force, since the legislature consecrated it in article 2274 of the Civil Code. This thesis is devoted to the study of good faith as erroneous belief, seeking to delimit its content and specify its legal regime, given that the notion is persistently criticized as blurred and vague. Erroneous belief results from objective, material elements that signal truth and shape the state of mind of its victim. The criterion for taking that victim into account and protecting her is the legitimacy of the erroneous belief, which qualifies the good faith. When the erroneous belief is legitimate, the presumption of good faith is consolidated and can consequently deploy all of its legal effects. Qualified good faith thus enjoys a power that protects and creates subjective rights, a power that encroaches on the force and effectiveness of the law and of certain legal principles. Good faith also serves as the foundation of certain legal rules. However, the normative power of the consolidated presumption of good faith is not absolute; it is limited by the precedence of certain legal rules that cannot yield to the creative and protective function of good faith, which is then sacrificed in favor of certain superior interests.
|
4 |
Processamento de dados de monitores de produtividade de cana-de-açúcar / Processing of data from sugarcane yield monitors. Maldaner, Leonardo Felipe, 10 July 2017 (has links)
In the sugarcane crop, harvesting is performed by a harvester that cuts and processes the crop along one (or two) rows of the established field. In this process, data from a yield monitor, where available, provide information with a variety of uses. However, the yield-data processing methods in use today were developed for grain yield datasets; when applied to a sugarcane yield dataset, they can eliminate records that reflect real within-row yield variation. The objective of this work is to develop post-processing methods that identify and remove erroneous records from yield-monitor datasets so that small yield variations within a sugarcane row can be characterized. Two steps are proposed: identification of outliers using a quartile-based statistical method, and a filtering step that compares yield values using only data from a single harvester pass. Four yield datasets generated by two monitors were used; yield monitor 1 recorded data at a frequency of 0.5 Hz and yield monitor 2 at 1 Hz. Erroneous records were traced to the synchronization time between the harvester and the infield transfer wagon during headland turns and during wagon exchanges; records with zero or null yield were also logged during harvester maneuvers. Different recording frequencies were simulated to verify whether the data density provided by the monitor influences the characterization of small yield variations within a pass.

The yield datasets generated by the different types of monitor demonstrated the need for post-processing to remove discrepant yield values. The methodology developed in this work was able to identify and eliminate the erroneous records in the datasets analyzed, and the filtering restricted to a single pass of the sugarcane harvester made it possible to characterize yield variation over short distances.
|
6 |
Detection of erroneous payments utilizing supervised and unsupervised data mining techniques. Yanik, Todd E., 09 1900 (has links)
Approved for public release; distribution is unlimited. / In this thesis we develop a procedure for detecting erroneous payments in the Defense Finance Accounting Service, Internal Review's (DFAS IR) Knowledge Base of Erroneous Payments (KBOEP), using supervised (logistic regression) and unsupervised (classification and regression trees, C&RT) modeling algorithms. S-Plus software was used to construct a supervised model of the vendor payment data using logistic regression, with the Hosmer-Lemeshow test used to assess the model's predictive ability. The Clementine data mining software was used to construct both supervised and unsupervised models of the vendor payment data using the logistic regression and C&RT algorithms. The logistic regression algorithm in Clementine generated a model with predictive probabilities, which were compared against those of the C&RT algorithm. In addition to comparing the predictive probabilities, receiver operating characteristic (ROC) curves were generated for both models to determine which provided the best results for a coincidence matrix's true positive, true negative, false positive, and false negative fractions. C&RT proved the better modeling technique and was given to DFAS IR to assist in reducing the manual record-selection process currently in use. A recommended rule set was provided, along with a detailed explanation of the algorithm selection process. / Lieutenant Commander, United States Navy
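The coincidence-matrix fractions mentioned above can be computed from a model's predicted probabilities as sketched below; the scores and labels are made up for illustration, not drawn from the thesis's DFAS data or its S-Plus/Clementine output:

```python
def coincidence_fractions(y_true, y_prob, threshold=0.5):
    """Return (TP, TN, FP, FN) fractions for binary predictions obtained
    by thresholding predicted probabilities."""
    tp = tn = fp = fn = 0
    for truth, prob in zip(y_true, y_prob):
        pred = 1 if prob >= threshold else 0
        if pred == 1 and truth == 1:
            tp += 1
        elif pred == 0 and truth == 0:
            tn += 1
        elif pred == 1 and truth == 0:
            fp += 1
        else:
            fn += 1
    n = len(y_true)
    return tp / n, tn / n, fp / n, fn / n

# Hypothetical model scores for eight payments (1 = erroneous payment).
y_true = [1, 0, 1, 0, 1, 0, 0, 1]
y_prob = [0.9, 0.2, 0.7, 0.4, 0.3, 0.1, 0.6, 0.8]
fracs = coincidence_fractions(y_true, y_prob)
```

Sweeping the threshold and plotting the true positive fraction against the false positive fraction at each setting is exactly what produces the ROC curves the thesis uses to compare the two models.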
|
7 |
Comparative Analysis of ChatGPT-4 and Gemini Advanced in Erroneous Code Detection and Correction. Sun, Erik Wen Han; Grace, Yasine, January 2024 (has links)
This thesis investigates the capabilities of two advanced Large Language Models (LLMs), OpenAI's ChatGPT-4 and Google's Gemini Advanced, in the domain of software engineering. While LLMs are widely utilized across various applications, including text summarization and synthesis, their potential for detecting and correcting programming errors has not been thoroughly explored. This study aims to fill this gap through a comprehensive literature search and an experimental comparison of ChatGPT-4 and Gemini Advanced on the QuixBugs and LeetCode benchmark datasets, with a specific focus on the Python and Java programming languages. The research evaluates the models' abilities to detect and correct bugs using metrics such as accuracy, recall, precision, and F1-score. Experimental results show that ChatGPT-4 consistently outperforms Gemini Advanced in both the detection and correction of bugs. These findings provide valuable insights that could guide further research in the field of LLMs.
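For reference, the F1-score used in the evaluation above is the harmonic mean of precision and recall; a minimal sketch with hypothetical bug-detection counts (not the thesis's measured results):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (tp / (tp + fp)) and
    recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one model on one benchmark: 8 bugs correctly
# flagged, 2 false alarms, 2 bugs missed.
score = f1_score(tp=8, fp=2, fn=2)  # precision = recall = 0.8, so F1 = 0.8
```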
|
8 |
Automatic key discovery for Data Linking / Découverte des clés pour le Liage de Données. Symeonidou, Danai, 09 October 2014 (has links)
In recent years, the Web of Data has grown significantly and now contains a huge number of RDF triples. Integrating data described in different RDF datasets and creating semantic links among them has become one of the most important goals of RDF applications. These links express semantic correspondences between ontology entities or between data. Among the different kinds of semantic links that can be established, identity links express that different resources refer to the same real-world entity. Comparing the number of resources published on the Web with the number of declared identity links shows that the goal of building a Web of Data is still far from accomplished. Several data-linking approaches infer identity links using keys. A key is a set of properties that uniquely identifies every resource described by the data. Nevertheless, in most datasets published on the Web, keys are not available, and declaring them can be difficult even for an expert.

The aim of this thesis is to study the problem of automatic key discovery in RDF data and to propose new, efficient approaches to tackle this problem. Data published on the Web are usually generated automatically and may therefore be voluminous and incomplete and contain erroneous information or duplicates. We therefore focus on key-discovery approaches that can handle datasets with numerous, incomplete, or erroneous triples. Our objective is to discover as many keys as possible, including keys that are valid only in subparts of the data. We first introduce KD2R, an approach for the automatic discovery of composite keys in RDF datasets that may conform to different schemas, applicable when the Unique Name Assumption is fulfilled. To deal with the incompleteness of data, KD2R proposes two heuristics that make different assumptions about absent information, and it uses pruning techniques to reduce the search space. However, this approach does not scale to the huge amounts of data found on the Web. We therefore present a second approach, SAKey, which scales to very large datasets through effective filtering and pruning techniques. Moreover, SAKey is capable of discovering keys in datasets where erroneous data or duplicates exist: it introduces the notion of "almost keys" to describe sets of properties that fail to be keys only because of a few exceptions.
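The "almost key" idea can be sketched as follows: a property set is an n-almost key if at most n resources share their values on those properties with some other resource. This is a simplified reading of SAKey's definition over a flat record table rather than RDF triples, with illustrative data and function names:

```python
from itertools import combinations

def exceptions(records, props):
    """Count resources that share the same values on `props` with another
    resource, i.e. violations of the candidate key."""
    seen = {}
    for rid, rec in records.items():
        signature = tuple(rec.get(p) for p in props)
        seen.setdefault(signature, []).append(rid)
    return sum(len(group) for group in seen.values() if len(group) > 1)

def almost_keys(records, properties, n):
    """Property sets with at most n exceptions (n-almost keys)."""
    return [set(combo)
            for size in range(1, len(properties) + 1)
            for combo in combinations(properties, size)
            if exceptions(records, combo) <= n]

# Toy dataset: two people share a last name, so {'last'} is not a key,
# while {'first'} and {'first', 'last'} identify every resource uniquely.
people = {
    "p1": {"first": "Ana", "last": "Silva"},
    "p2": {"first": "Bo", "last": "Silva"},
    "p3": {"first": "Caio", "last": "Mota"},
}
keys = almost_keys(people, ["first", "last"], n=0)
```

Raising `n` relaxes the criterion: with `n=2`, `{'last'}` would also qualify, which is how the approach tolerates a few erroneous or duplicate resources.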
|
9 |
Detección de defectos en telas poliéster utilizando técnicas de procesamiento de imágenes / Detection of defects in polyester fabrics using image-processing techniques. Aguilar Lara, Pedro Alexis; Tueros Gonzales, Jhon Jobany, January 2015 (has links)
This project implements algorithms for detecting defects in polyester fabrics. Since its beginnings, industry has used technological advances not only to optimize manufacturing processes but also to improve product quality. While the flaws that degrade the quality of polyester fabrics cannot be prevented entirely, they can be detected through visual inspection within the manufacturing process.

In this study, image-processing algorithms were implemented using LabVIEW software libraries to detect defects in polyester fabrics, based on fabric samples with common stains (MC), oil stains (MA), and erroneous stitches (PE). Several experimental tests were carried out on a small-scale test module built to the size of the fabric samples, using a double-side lighting technique. The analysis was based on the histogram of the original image of each sample, which shows the number of pixels (image size) at each intensity in the 0–255 range (0 being the minimum value and 255 the maximum). From these histograms, numerical detection ranges of pixel intensity were parameterized for each defect type: an interval of 0–195 for common stains (MC) and 167–194 for oil stains (MA). These parameters supported the detection algorithm for each type of defect and validated what was proposed at the outset of this investigation.
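The histogram-based detection described above reduces to counting pixel intensities inside a defect band; a minimal sketch assuming an 8-bit grayscale patch and an illustrative minimum-fraction decision threshold (the thesis's exact decision rule is not specified):

```python
def detect_defect(pixels, lo, hi, min_fraction=0.01):
    """Flag a patch as defective if the fraction of pixel intensities
    falling inside the band [lo, hi] reaches min_fraction."""
    inside = sum(1 for p in pixels if lo <= p <= hi)
    return inside / len(pixels) >= min_fraction

# Hypothetical 8-bit grayscale patch: mostly bright fabric (~230) with a
# darker region whose intensities fall in the oil-stain band 167-194.
patch = [230] * 90 + [180] * 10
has_oil_stain = detect_defect(patch, 167, 194)
clean_patch = detect_defect([230] * 100, 167, 194)
```

The same routine with the 0–195 band would cover the common-stain case; in practice the band and the minimum fraction would be tuned against labeled samples, as the study did experimentally.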
|
10 |
Potential of Smart-Inhalers in Reducing Human and Economic Costs of Erroneous Inhaler Use / Potentialen för smarta inhalatorer att minska mänsklig och ekonomisk kostnad av felaktigt inhalatoranvändande. Grünfeld, Anton, January 2022 (has links)
This thesis investigates the possibilities of increasing the efficacy of, and generally improving, unsupervised medical treatments by implementing electronics and embedded systems (so-called smart devices) that allow the physician to monitor or track the treatment and the patient's adherence to it. The diseases in focus are respiratory: asthma and Chronic Obstructive Pulmonary Disease (COPD). The thesis furthermore attempts to show that shortcomings in the current treatment of these diseases incur significant human costs through loss of quality of life for patients, and cause avoidable costs, both direct and indirect, to health-care systems and societies on a macroeconomic scale. It finds that the technology to create a smart inhaler exists and that, while not a panacea, such a device can address many of the identified issues with the current mode of treatment.

This thesis was written in partnership with SHL Medical AB, and the author wishes to extend special thanks to Plamen Balkandjiev and Mattias Myrman for their help, support, and patience.
|