  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

官員職等陞遷分類預測之研究 / Classification prediction on government official’s rank promotion

賴隆平, Lai, Long Ping Unknown Date
Civil-service personnel promotion is a highly complex process that conceals many constant rules and procedures. The relationships between superiors and subordinates, and among civil servants themselves, are as intricate as a spider's web, and individual promotions often reflect tugs-of-war between factions or the grooming of junior successors. The publicly available Presidential Office Gazette (presidential orders) records the appointment data of all civil servants, including promotions, appointments, and dismissals between posts; each record also contains the agency, unit, job title, and rank, and can therefore support a variety of studies.

This thesis organises these records into a promotion-sequence data model and analyses it with two data-mining algorithms, the Support Vector Machine (SVM) and the decision tree. Domain knowledge of personnel administration is used to identify the more influential attributes and to design the experimental models. Multiple models and multiple data sets are tested; per-class prediction quality is reported through overall average accuracy and charts, results computed from different attribute sets are compared to assess their plausibility, and the resulting figures are used to evaluate the soundness and feasibility of the approach.

For both classifiers, prediction accuracy improves as the amount of training data grows, as is typical of such models, and the managerial-position attributes and key attributes uncovered agree with the logic of personnel promotion. Each classifier has its strengths, but overall the SVM performs slightly better. The SVM does, however, require that weakly influential attributes be removed beforehand; otherwise they distort the logic by which the separating hyperplane is constructed and degrade the predictions. The decision tree has no such problem and is more broadly applicable, since the type of each attribute can be declared, allowing classification experiments over different attribute data types.

The prediction accuracy achieved by the SVM and the decision tree is roughly 77% to 82%, which indicates that the promotion of middle- and high-ranking civil servants follows a discernible pattern: it is governed by institutional norms and is stable, rather than being arbitrary appointment and removal. These data-mining results can inform human-resource management, organisational development, promotion planning, and departmental streamlining in the public sector, and, given the relevant attributes as input, the model can help serving civil servants estimate their promotion prospects for career planning.
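The robustness contrast drawn in this abstract can be illustrated with a small self-contained sketch (the data are invented here, not the thesis's gazette records): a depth-1 decision stump stands in for the decision tree, and because it scores each attribute independently, an uninformative attribute is simply never selected, while the informative one separates the classes cleanly.

```python
import random

random.seed(0)

def make_data(n):
    """Toy promotion records: feature 0 is informative, feature 1 is noise."""
    data = []
    for _ in range(n):
        tenure = random.uniform(0, 10)    # hypothetical "years at current rank"
        noise = random.uniform(-50, 50)   # uninformative attribute
        label = 1 if tenure > 5 else -1   # toy rule: promoted iff tenure > 5
        data.append(((tenure, noise), label))
    return data

def best_stump_accuracy(data, feature):
    """Accuracy of the best 'x > t' rule on one feature (a depth-1 tree)."""
    best = 0.0
    for threshold in (x[feature] for x, _ in data):
        acc = sum((1 if x[feature] > threshold else -1) == y
                  for x, y in data) / len(data)
        best = max(best, acc)
    return best

train = make_data(200)
acc_informative = best_stump_accuracy(train, 0)
acc_noise = best_stump_accuracy(train, 1)
print(acc_informative, acc_noise)  # the noisy attribute carries no usable signal
```

The same uninformative attribute, fed raw into a margin-based classifier, is what the abstract warns must be pruned before training the SVM.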
252

Improving computational predictions of Cis-regulatory binding sites in genomic data

Rezwan, Faisal Ibne January 2011
Cis-regulatory elements are the short regions of DNA to which specific regulatory proteins bind; these interactions subsequently influence the level of transcription of associated genes by inhibiting or enhancing the transcription process. It is known that much of the genetic change underlying morphological evolution takes place in these regions, rather than in the coding regions of genes. Identifying these sites in a genome is a non-trivial problem. Experimental (wet-lab) methods for finding binding sites exist, but all have limitations regarding their applicability, accuracy, availability or cost. Computational methods for predicting the position of binding sites, on the other hand, are less expensive and faster. Unfortunately, however, these algorithms perform rather poorly, some missing most binding sites and others over-predicting their presence. The aim of this thesis is to develop and improve computational approaches for the prediction of transcription factor binding sites (TFBSs) by integrating the results of computational algorithms and other sources of complementary biological evidence. Previous related work involved the use of machine learning algorithms for integrating predictions of TFBSs, with particular emphasis on the Support Vector Machine (SVM). This thesis has built upon, extended and considerably improved this earlier work. Data from two organisms were used. Firstly, the relatively simple genome of yeast was used; in yeast, the binding sites are fairly well characterised and are normally located near the genes that they regulate. The techniques used on the yeast genome were then tested on the more complex genome of the mouse, whose regulatory mechanisms are known to be considerably more elaborate, making it an interesting organism on which to investigate the techniques described here.
The initial results were, however, not particularly encouraging: although a small improvement on the base algorithms could be obtained, the predictions were still of low quality. This was the case for both the yeast and mouse genomes. However, when the negatively labelled vectors in the training set were changed, a substantial improvement in performance was observed. The first change was to choose regions of the mouse genome lying a long way (distal) from any gene, over 4000 base pairs away, as the regions not containing binding sites. This produced a major improvement in performance. The second change was simply to use randomised training vectors, which contained no meaningful biological information, as the negative class. This gave some improvement on the yeast genome, but had a very substantial benefit for the mouse data, considerably improving on the aforementioned distal negative training data. In fact, the resulting classifier found over 80% of the binding sites in the test set, and moreover 80% of its predictions were correct. The final experiment used an updated version of the yeast dataset, with more state-of-the-art algorithms and more recent TFBS annotation data. Here it was found that using randomised or distal negative examples once again gave very good results, comparable to those obtained on the mouse genome. A further source of negative data was also tried for this yeast dataset, namely vectors taken from intronic regions. Interestingly, this gave the best results of all.
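The "randomised negative class" idea above can be sketched in a few lines (invented score vectors, not genomic data, and a nearest-centroid rule standing in for the SVM): positives carry a weak but consistent signal in each algorithm score, while the negatives are pure noise.

```python
import random

random.seed(1)

def positive_vector(k=4):
    # hypothetical per-algorithm scores, higher around a real binding site
    return [random.gauss(0.7, 0.15) for _ in range(k)]

def randomised_negative(k=4):
    # no biological meaning: uniform noise over the score range
    return [random.uniform(0.0, 1.0) for _ in range(k)]

def centroid(vectors):
    k = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(k)]

train_pos = [positive_vector() for _ in range(100)]
train_neg = [randomised_negative() for _ in range(100)]
pos_c, neg_c = centroid(train_pos), centroid(train_neg)

def predict(v):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(v, c))
    return 1 if dist(pos_c) < dist(neg_c) else 0

test_set = [(positive_vector(), 1) for _ in range(50)] + \
           [(randomised_negative(), 0) for _ in range(50)]
accuracy = sum(predict(v) == y for v, y in test_set) / len(test_set)
```

With such negatives the decision boundary only has to separate signal from noise, which is one intuition for why this choice helped so much on the mouse data.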
253

Restructuring the socially anxious brain : Using magnetic resonance imaging to advance our understanding of effective cognitive behaviour therapy for social anxiety disorder / Hjärnan formas av psykologisk behandling

Månsson, Kristoffer N. T. January 2016
Social anxiety disorder (SAD) is a common psychiatric disorder associated with considerable suffering. Cognitive behaviour therapy (CBT) has been shown to be effective, but a significant proportion of patients does not respond, or relapses, underlining the need to augment treatment. Neuroimaging could elucidate the interplay between psychological and neurobiological processes and may help to improve current therapeutics. To address this issue, functional and structural magnetic resonance imaging (MRI) was repeatedly conducted on individuals with SAD randomised to receive CBT or an active control condition. MRI was performed pre- and post-treatment, as well as at one-year follow-up. Matched healthy controls were also scanned in order to evaluate disorder-specific neural responsivity and structural morphology. This thesis aimed at answering three major questions. I) Does the brain's fear circuitry (e.g., the amygdala) change, with regard to neural response and structural morphology, immediately after CBT? II) Are the immediate changes in the brain still present at long-term follow-up? III) Can neural responsivity in the fear circuitry predict long-term treatment outcome at the level of the individual? Accordingly, different analytic methods were used. Firstly, multimodal neuroimaging addressed questions on concomitant changes in neural response and grey matter volume. Secondly, two different experimental functional MRI tasks captured neural responses both to emotional faces and to self-referential criticism. Thirdly, support vector machine (SVM) learning was used to evaluate neural predictors at the level of the individual. Amygdala responsivity to self-referential criticism was found to be elevated in individuals with SAD, as compared to matched healthy controls, and the neural response was attenuated after effective CBT.
In individuals with SAD, amygdala grey matter volume was positively correlated with symptoms of anticipatory speech anxiety, and CBT-induced symptom reduction was associated with decreased grey matter volume of the amygdala. The CBT-induced reduction of amygdala grey matter volume was evident both at short- and long-term follow-up. In contrast, the amygdala neural response was weakened immediately after treatment, but not at one-year follow-up. Beyond treatment effects on the brain, pre-treatment connectivity between the amygdala and the dorsal anterior cingulate cortex (dACC) was stronger in long-term CBT non-responders than in long-term CBT responders. Importantly, by use of an SVM algorithm, pre-treatment neural response to self-referential criticism in the dACC accurately predicted (>90%) the clinical response to CBT. In conclusion, modifying the amygdala is a likely mechanism of action in CBT, underlying the anxiolytic effects of this treatment, and the brain's neural activity during self-referential criticism may be an accurate and clinically relevant predictor of the long-term response to CBT. Along these lines, neuroimaging is a vital tool in clinical psychiatry that could potentially improve clinical decision-making based on an individual's neural characteristics. / Social anxiety disorder is one of the most common psychiatric disorders, and more than a million Swedes are estimated to suffer from it. It often has severe consequences for those affected, and increased costs for society, for example in the form of sick leave, have also been noted. Although many sufferers never seek help, effective treatments for social anxiety exist; both pharmacological and psychological treatments are recommended by the Swedish National Board of Health and Welfare (Socialstyrelsen). Cognitive behaviour therapy (CBT) is an evidence-based and recommended psychological treatment for social anxiety, yet even with current effective interventions a proportion of individuals do not improve. A large number of studies show that, compared with healthy individuals, people with social anxiety are characterised by excessive activity in a brain network whose task is to interpret and react to threatening information; this activity is localised in the fear network, where the amygdala plays a central role. There is a need to improve current treatments, and this thesis aims to increase our understanding of a neurobiological mechanism of action behind CBT for social anxiety. In this research project, magnetic resonance imaging (MRI) was used to examine people suffering from social anxiety, with repeated measurements before, after, and at one-year follow-up of anxiety-reducing treatment. Individuals without social anxiety were also examined, both to understand how patients differ from healthy people and to investigate whether treatment normalises the patient's brain. While being scanned, participants completed two experiments probing how the brain reacts to affective information: they viewed pictures of faces expressing emotions, such as angry and fearful expressions, and read critical comments directed at themselves or at someone else, such as "nobody likes you" or "she is incompetent". Structural images of the participants' brains were also collected at each measurement session. In addition, all participants were told that after the MRI session they would give an oral presentation in front of an audience; this task is usually the worst imaginable for individuals with social anxiety, and its purpose was to relate brain structure and activity to how much anticipatory anxiety the individuals experienced. Three questions were posed in this thesis. a) Do structural and functional changes arise in the fear network immediately after completed CBT (Studies I and II)? b) Do the early post-treatment changes persist one year later (Study III)? c) Can the brain's reactions in the fear network predict which individuals will improve from anxiety-reducing psychological treatment in the long term? The results of the studies are summarised as follows: reactions to self-directed criticism in the amygdala are exaggerated in individuals with social anxiety compared with healthy individuals; amygdala reactions decrease after CBT, and the decreases correlate with reduced symptoms of social anxiety; the structural volume of the amygdala correlates positively with how much anxiety individuals experience before an oral presentation, and the reduction of these symptoms also correlates with how much amygdala volume decreases after CBT; the reduction in amygdala volume and the concurrent decrease in amygdala reactivity to self-directed criticism are correlated, and mediation analyses suggest that the volume reduction drives the relationship between decreased reactivity and decreased public-speaking anxiety; the structural reduction of the amygdala is seen both immediately after treatment and at one-year follow-up, whereas the decrease in amygdala reactivity to self-directed criticism is present immediately after treatment but not sustained at one year; coupling between amygdala reactivity to self-directed criticism and the dorsal anterior cingulate cortex was stronger in those who did not improve in the long term than in those who did; and, using a support vector machine and a pattern of pre-treatment brain activity in the dorsal anterior cingulate cortex, it was predicted with 92% accuracy which individuals would still be improved one year after effective psychological treatment. From these observations, the conclusions are that structural and functional change in the amygdala is a plausible neurobiological mechanism for reduced social anxiety after CBT, and that reactivity in the anterior cingulate can provide clinically relevant information about who will improve from psychological treatment. This information could be important in the psychiatry of the future, both for developing existing treatments and for supporting clinicians' decisions on whether a given individual should be offered a specific treatment. / Illustration on the cover by Jan Lööf; cover image printed with permission from Jan Lööf and Bonnier Carlsen Förlag; art direction by Staffan Lager. The thesis is reprinted; the previous ISBN was 9789176856888.
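Individual-level prediction figures like those reported above are usually obtained with leave-one-out cross-validation: each patient is held out in turn, the classifier is trained on the rest, and the held-out patient is predicted. A minimal sketch of the procedure follows, with invented single-number "dACC response" values and a nearest-class-mean rule standing in for the SVM.

```python
def nearest_mean_predict(train, x):
    """Predict the class whose training mean is closest to x (1-D features)."""
    groups = {}
    for value, label in train:
        groups.setdefault(label, []).append(value)
    dists = {label: (sum(vs) / len(vs) - x) ** 2 for label, vs in groups.items()}
    return min(dists, key=dists.get)

# one hypothetical "dACC response" value per patient; 1 = long-term responder
patients = [(0.9, 1), (0.8, 1), (1.1, 1), (0.7, 1),
            (0.2, 0), (0.3, 0), (0.1, 0), (0.4, 0)]

correct = 0
for i, (value, label) in enumerate(patients):
    held_out_train = patients[:i] + patients[i + 1:]   # train on everyone else
    correct += nearest_mean_predict(held_out_train, value) == label
accuracy = correct / len(patients)
```

Because the held-out patient never influences the training means, the resulting accuracy estimates how the rule would generalise to a new individual.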
254

Microbial phenomics information extractor (MicroPIE): a natural language processing tool for the automated acquisition of prokaryotic phenotypic characters from text sources

Mao, Jin, Moore, Lisa R., Blank, Carrine E., Wu, Elvis Hsin-Hui, Ackerman, Marcia, Ranade, Sonali, Cui, Hong 13 December 2016
Background: The large-scale analysis of phenomic data (i.e., the full set of phenotypic traits of an organism, such as shape, metabolic substrates, and growth conditions) in microbial bioinformatics has been hampered by the lack of tools to rapidly and accurately extract phenotypic data from existing legacy text in the field of microbiology. To quickly obtain knowledge of the distribution and evolution of microbial traits, an information extraction system was needed to extract phenotypic characters from large numbers of taxonomic descriptions so that they can be used as input to existing phylogenetic analysis software packages. Results: We report the development and evaluation of the Microbial Phenomics Information Extractor (MicroPIE, version 0.1.0). MicroPIE is a natural language processing application that uses a robust supervised classification algorithm (Support Vector Machine) to identify characters from sentences in prokaryotic taxonomic descriptions, followed by a combination of algorithms that apply linguistic rules together with groups of known terms to extract characters as well as character states. The input to MicroPIE is a set of taxonomic descriptions (clean text). The output is a taxon-by-character matrix, with taxa in the rows and a set of 42 pre-defined characters (e.g., optimum growth temperature) in the columns. The performance of MicroPIE was evaluated against a gold-standard matrix and against a matrix made by undergraduate students. Results show that, compared to the gold standard, MicroPIE extracted 21 characters (50%) with a Relaxed F1 score > 0.80 and 16 characters (38%) with Relaxed F1 scores between 0.50 and 0.80. Inclusion of the character prediction component (SVM) improved the overall performance of MicroPIE, notably its precision. Evaluated against the same gold standard, MicroPIE performed significantly better than the undergraduate students.
Conclusion: MicroPIE is a promising new tool for the rapid and efficient extraction of phenotypic character information from prokaryotic taxonomic descriptions. However, further development, including incorporation of ontologies, will be necessary to improve the performance of the extraction for some character types.
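A "relaxed" match criterion of the kind used in this evaluation can be made concrete with a toy scoring function (our own simplified definition for illustration, not MicroPIE's exact metric): a predicted character value counts as matching when it overlaps a gold value as a substring in either direction.

```python
def relaxed_f1(gold, predicted):
    """F1 where a substring overlap in either direction counts as a match."""
    def overlaps(a, b):
        return a in b or b in a
    tp_pred = sum(any(overlaps(p, g) for g in gold) for p in predicted)
    tp_gold = sum(any(overlaps(g, p) for p in predicted) for g in gold)
    precision = tp_pred / len(predicted) if predicted else 0.0
    recall = tp_gold / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# hypothetical extracted character values for one taxon
gold = ["25-30", "rod-shaped", "glucose"]
pred = ["25-30", "rod", "sucrose"]
f1 = relaxed_f1(gold, pred)   # "rod" partially matches "rod-shaped"
```

Under a strict (exact-match) F1 the partial extraction "rod" would count as an error, which is why relaxed scores give a fairer picture of an extractor that trims or splits phrases.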
255

Intelligent Decision Support Systems for Compliance Options : A Systematic Literature Review and Simulation

PATTA, SIVA VENKATA PRASAD January 2019
The project revolves around logistics and its adaptation to new rules. The objective of this project is to minimise data tampering to the lowest level possible. To achieve the set goals, a decision support system and simulation have been used. To gain clear insight into how they can be implemented, a systematic literature review (including a case study) was conducted, followed by interviews with personnel at Kakinada port to understand the real-time complications in the field. A simulated experiment using real-time data from Kakinada port was then conducted to achieve the set goals and improve the level of transparency on all sides, i.e., shipper, port and terminal.
256

Surveillance de l’état de santé des actionneurs électromécaniques : application à l'aéronautique / Health-monitoring of electromechanical actuators : application to aeronautics

Breuneval, Romain 21 December 2017
The aeronautics industry faces three major challenges: reducing its environmental footprint, absorbing the growth in air traffic, and maintaining a high level of safety for increasingly complex systems, at equivalent cost. Predictive maintenance addresses part of all three. Systems whose remaining life can be predicted can be used for longer, which reduces the number of components consumed over the life of an aircraft. Predicting failures also increases aircraft availability by avoiding unplanned downtime. Finally, monitoring the ageing of the aircraft makes it possible to optimise maintenance and thus reduce costs. In the 2000s these methodologies were applied to engines; they are now being generalised to other avionic systems. This thesis therefore concerns methodologies for the predictive maintenance of electromechanical actuators (EMAs) for flight controls. The problems and constraints (computation time, memory budget, etc.) attached to this subject are detailed. First, the computation of fault features is investigated. A method for screw/nut systems, based on the identification of a friction model, is proposed. A second method, based on current analysis using a combination of complete ensemble empirical mode decomposition and blind source separation, is then introduced. Both methods are tested on data from non-operational profiles, produced by a simulation model that finely represents the actuator in its environment; the set of simulations constitutes virtual trials on a population of EMAs. From these simulations the proposed features are computed, and performance metrics are then evaluated to validate them. Diagnosis by pattern recognition is treated next. An algorithm combining a support vector machine with fuzzy membership functions is proposed. It can estimate the severity of a fault, and it can also detect observations that do not match the training base, which may represent unknown faults, or observations belonging to several classes at once, which may represent combined faults. The architecture of a complete diagnosis system based on this algorithm is detailed. Experimental validations of the feature-computation and diagnosis methods are then carried out on three databases from three test campaigns: trials of a healthy EMA on a representative bench; an induction motor in steady state with several types of fault; and an aeronautics-grade permanent-magnet synchronous motor in steady state with inter-turn short-circuit faults of various severities. Compliance of the algorithm with the aeronautics constraints is verified. Finally, elements towards prognosis are given. The prognosis process is detailed, and part of it is treated on a benchmark database of ageing roller bearings. First, the clustering of ageing data to create fault-severity classes is studied; this task led to a metric, called temporal coherence, which checks that a clustering result satisfies the constraints of prognosis. The proposed classification algorithm is then validated on the clustered data, which leads to two validation approaches, one diagnosis-oriented and one prognosis-oriented; a sigmoid-based normalisation method is proposed for the prognosis-oriented approach. The prediction of features into the future is then treated using a support vector regression algorithm, and the diagnosis algorithm is finally applied to the predicted data, which makes it possible to estimate the end of life, and hence the remaining useful life at a given time. These estimates are evaluated against several performance metrics and against the constraints of the aeronautics application field.
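The final step described above, extrapolating a degradation feature into the future and reading off a remaining-useful-life estimate where the trend crosses a failure threshold, can be sketched with plain least squares standing in for the support vector regression (the vibration values and the threshold below are invented):

```python
def fit_line(ts, ys):
    """Ordinary least-squares line through (ts, ys)."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return slope, my - slope * mt

hours = [0, 100, 200, 300, 400]
vibration = [1.0, 1.2, 1.41, 1.58, 1.82]   # health feature drifting upward

slope, intercept = fit_line(hours, vibration)

FAILURE_LEVEL = 3.0                         # hypothetical end-of-life threshold
t_fail = (FAILURE_LEVEL - intercept) / slope
rul = t_fail - hours[-1]                    # remaining useful life from now
```

In the thesis the extrapolated features are fed back into the diagnosis algorithm; here the threshold crossing alone serves as the end-of-life estimate.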
257

A influência do contexto de discurso na segmentação automática das fases do gesto com aprendizado de máquina supervisionado / The influence of the speech context on the automatic segmentation of the phases of the gesture with supervised machine learning

Rocha, Jallysson Miranda 27 April 2018
Gestures are actions that form part of human communication. They commonly occur together with speech and can manifest either as an intentional act, such as using the hands to explain the shape of an object, or as a behavioural pattern, such as scratching the head or adjusting one's glasses. Gestures help the speaker construct the speech and help the listener understand the message being conveyed. Researchers from several areas are interested in understanding the relationship between gestures and other elements of the linguistic system, whether to support studies in Linguistics and Psycholinguistics or to improve human-machine interaction. Among the different lines of study that explore this subject is the one that analyses gestures in terms of their phases: preparation, pre-stroke hold, stroke, post-stroke hold, hold and retraction. The development of systems capable of automating the segmentation of a gesture into its phases is therefore useful. Supervised machine-learning techniques have already been applied to this problem and promising results have been obtained. However, there is a difficulty inherent in the analysis of gesture phases, which reveals itself when the context in which the gestures are performed changes. Although there are some basic premises defining the manifestation pattern of each gesture phase, in different contexts these premises may vary in ways that make automatic analysis highly complex. This is the problem addressed in this work, which studied, with the support of machine learning, the variability of the pattern inherent in each gesture phase when the phases are produced by the same individual but in different contexts of speech production. The speech contexts considered in this study are: storytelling, improvisation, description of scenes, interviews and lectures.
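Frame-wise labelling followed by run grouping is the usual backbone of this kind of automatic segmentation; a deliberately tiny sketch follows, in which a hand-speed threshold replaces the supervised models of the thesis and only two of the six phases are distinguished.

```python
speeds = [0.1, 0.1, 0.9, 1.1, 1.0, 0.2, 0.1, 0.1]  # per-frame hand speed (toy)

def label(speed, threshold=0.5):
    """Toy stand-in for a trained classifier: fast frames are strokes."""
    return "stroke" if speed > threshold else "hold"

labels = [label(s) for s in speeds]

def segments(labels):
    """Group contiguous runs of identical labels into (label, start, end)."""
    segs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segs.append((labels[start], start, i - 1))
            start = i
    return segs

phases = segments(labels)
# phases == [('hold', 0, 1), ('stroke', 2, 4), ('hold', 5, 7)]
```

A trained classifier would replace `label`, emitting one of the six phases per frame; the `segments` pass that turns frame labels into phase intervals stays the same.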
258

Evaluation de l'adhérence au contact roue-rail par analyse d'images spectrales / Wheel-track adhesion evaluation using spectral imaging

Nicodeme, Claire 04 July 2018 (has links)
L’avantage du train depuis sa création est sa faible résistance à l’avancement du fait du contact fer-fer de la roue sur le rail conduisant à une adhérence réduite. Cependant cette adhérence faible est aussi un inconvénient majeur : étant dépendante des conditions environnementales, elle est facilement altérée lors d’une pollution du rail (végétaux, corps gras, eau, etc.). Aujourd’hui, les mesures prises face à des situations d'adhérence dégradée impactent directement les performances du système et conduisent notamment à une perte de capacité de transport. L’objectif du projet est d’utiliser les nouvelles technologies d’imagerie spectrale pour identifier sur les rails les zones à adhérence réduite et leur cause afin d’alerter et d’adapter rapidement les comportements. La stratégie d’étude a pris en compte les trois points suivants : • Le système de détection, installé à bord de trains commerciaux, doit être indépendant du train. • La détection et l’identification ne doivent pas interagir avec la pollution pour ne pas rendre la mesure obsolète. Pour ce faire le principe d’un Contrôle Non Destructif est retenu. • La technologie d’imagerie spectrale permet de travailler à la fois dans le domaine spatial (mesure de distance, détection d’objet) et dans le domaine fréquentiel (détection et reconnaissance de matériaux par analyse de signatures spectrales). Dans le temps imparti des trois ans de thèse, nous nous sommes focalisés sur la validation du concept par des études et analyses en laboratoire, réalisables dans les locaux de SNCF Ingénierie & Projets. Les étapes clés ont été la réalisation d’un banc d’évaluation et le choix du système de vision, la création d'une bibliothèque de signatures spectrales de référence et le développement d'algorithmes de classification supervisée et non supervisée des pixels. Ces travaux ont été valorisés par le dépôt d'un brevet et la publication d'articles dans des conférences IEEE. 
/ The advantage of the train since its creation is its low resistance to motion, due to the iron-on-iron contact of the wheel on the rail, which leads to low adhesion. However, this low adhesion is also a major drawback: being dependent on environmental conditions, it is easily degraded when the rail is polluted (vegetation, grease, water, etc.). Nowadays, the strategies adopted in the face of degraded adhesion impact the performance of the system and lead to a loss of transport capacity. The objective of the project is to use a new spectral imaging technology to identify areas of reduced adhesion on the rails, and their cause, in order to quickly alert and adapt the train's behaviour. The study's strategy took into account the three following points: - The detection system, installed on board commercial trains, must be independent of the train. - The detection and identification process should not interact with the pollution, in order to keep the measurements unbiased; to this end, a Non-Destructive Testing approach was chosen. - Spectral imaging technology makes it possible to work with both spatial information (distance measurement, object detection) and spectral information (material detection and recognition by analysis of spectral signatures). In the time allotted for the three-year thesis, we focused on validating the concept through laboratory studies and analyses, feasible on the premises of SNCF Ingénierie & Projets. The key steps were the construction of an evaluation bench and the choice of the vision system, the creation of a library of reference spectral signatures, and the development of supervised and unsupervised pixel classification algorithms. A patent describing the method and process has been filed, and articles have been published in IEEE conferences.
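One simple way to match pixels against a library of reference spectral signatures, in the spirit of the classification step the abstract describes, is the spectral-angle criterion: label each pixel with the reference whose spectrum forms the smallest angle with it. A sketch with a hypothetical five-band library (the signatures below are invented for illustration, not SNCF data):

```python
# Sketch: nearest-signature pixel labelling by spectral angle.
import numpy as np

def spectral_angle(p, r):
    """Angle (radians) between a pixel spectrum p and a reference spectrum r."""
    cos = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical reference library: reflectance over 5 spectral bands.
library = {
    "clean_rail": np.array([0.8, 0.7, 0.6, 0.5, 0.4]),
    "vegetation": np.array([0.1, 0.2, 0.6, 0.7, 0.8]),
    "water":      np.array([0.3, 0.3, 0.2, 0.1, 0.05]),
}

def classify_pixel(pixel):
    # Pick the library entry with the smallest spectral angle to the pixel.
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

pixel = np.array([0.75, 0.68, 0.55, 0.48, 0.41])  # shape close to "clean_rail"
print(classify_pixel(pixel))
```

The angle criterion is insensitive to overall brightness, which is why it is a common baseline for comparing spectra; a supervised classifier trained on labelled pixels, as in the thesis, can go further.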
259

DETERMINAÇÃO DE MODELO DE ESTIMATIVA DE TEORES DE CARBONO EM SOLOS UTILIZANDO MÁQUINA DE VETOR DE SUPORTE E REFLECTÂNCIA ESPECTRAL

Teixeira, Sandro 31 July 2014 (has links)
Considered a quality indicator, carbon is an important attribute of the productive capacity of the soil. However, the traditional methodologies used to determine carbon cause environmental problems due to the use of chemical reagents. Replacing this procedure with others that generate little or no toxic waste has therefore been considered important. Spectroscopy is one of the promising techniques in Precision Agriculture for soil analysis and can be used to estimate carbon content. Its benefits include sample preservation, no consumption of reagents, and efficiency in acquiring data from a large number of samples. The aim of this work was to contribute to determining a regression model able to predict the carbon content of soil samples using spectroscopy in the visible and near-infrared region. The SVM machine-learning technique available in the WEKA software was used to create the model. Because of its generalization ability, SVM has been considered a better alternative to established multivariate regression methods. Two sets of soil samples collected in the Campos Gerais region were used in the experiments. The evaluation of the results was based on the forecast errors and on the correlation coefficients between the carbon content values predicted by the model and the measured values. Correlation coefficients ranging from 0.84 to 0.90 were found. It was concluded that vis-NIR spectroscopy combined with the SVM technique can be recommended as an alternative to conventional methods for carbon analysis in soil. / Considerado um indicador de qualidade, o carbono constitui-se em um importante atributo na capacidade produtiva do solo. Porém, as tradicionais metodologias empregadas para sua determinação geram problemas ambientais devido ao uso de reagentes químicos. 
Diante disso, a substituição desse procedimento por outros que gerem menor ou nenhuma quantidade de resíduos tóxicos tem sido considerada relevante. A espectroscopia é uma das técnicas promissoras na Agricultura de Precisão para análises de solos e que pode trazer uma solução viável para análise de teor de carbono. Dentre suas vantagens, destaca-se a preservação da amostra, o não consumo de reagentes, além de sua eficiência na aquisição de dados provenientes de um grande número de amostras. O objetivo deste trabalho foi contribuir com um modelo de regressão capaz de predizer a quantidade de carbono em amostras de solo utilizando a espectroscopia na região do visível e no infravermelho próximo. Para tanto, foi utilizada a técnica de Aprendizagem de Máquina SVM incorporada ao software WEKA como auxílio na criação do modelo. A SVM tem representado uma alternativa melhor aos já consagrados métodos de regressão multivariada por apresentar capacidade de generalização. Nos experimentos realizados foram utilizados dois conjuntos de amostras de solo coletadas na região dos Campos Gerais. A avaliação dos resultados teve como base os erros de previsão e os coeficientes de correlação entre os valores dos teores de carbono preditos pelo modelo. Foram encontrados coeficientes de correlação que variaram entre 0,84 e 0,90. Concluiu-se que a espectroscopia no vis-NIRS aliada à técnica SVM é recomendada como uma alternativa aos métodos convencionais de análise de carbono em solos.
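The thesis built its regression model in WEKA; the same idea, support vector regression of carbon content on vis-NIR reflectance, can be sketched with scikit-learn's `SVR`. The spectra below are synthetic and encode only one assumed relationship, that higher-carbon soils reflect less; the carbon range, band count, and noise levels are invented for illustration:

```python
# Sketch: support vector regression of soil carbon content from
# synthetic vis-NIR reflectance spectra.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_bands = 300, 20

carbon = rng.uniform(5.0, 40.0, n_samples)          # hypothetical g/kg range
base = rng.normal(0.5, 0.05, (n_samples, n_bands))  # baseline reflectance
# Assumption: darker (lower-reflectance) soils hold more organic carbon.
spectra = (base
           - 0.005 * carbon[:, None]
           + rng.normal(0, 0.01, (n_samples, n_bands)))

X_tr, X_te, y_tr, y_te = train_test_split(
    spectra, carbon, test_size=0.3, random_state=1)
model = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X_tr, y_tr)

# Evaluate as the thesis does: correlation between predicted and measured values.
r = np.corrcoef(y_te, model.predict(X_te))[0, 1]
print(f"Pearson r on held-out samples: {r:.2f}")
```

On real soil spectra, preprocessing (smoothing, derivative transforms) and careful kernel-parameter selection matter considerably more than in this toy setup.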
260

Tempo de atravessamento de ações cíveis na Justiça Federal de 2o grau: ajuste de modelos baseados em redes neurais e máquina de vetor suporte

Gruginskie, Lúcia Adriana dos Santos 03 September 2018 (has links)
Os órgãos do Poder Judiciário, como exemplo de instituições públicas, são fundamentais para o desenvolvimento econômico e social. Porém, os principais problemas do judiciário brasileiro, apontados pelo Ministério da Justiça, são o alto número de processos em estoque, a falta de acesso à justiça e a morosidade, considerada como o principal aspecto da crise do judiciário. Neste sentido, esta tese propõe estruturar e comparar modelos de previsão de tempo de atravessamento processual de ações judiciais civis na justiça federal de 2a Instância, para servir de informação para as partes processuais e administração. O estudo foi realizado no Tribunal Regional Federal da 4a Região, com dados de processos cíveis baixados em 2017. Para tanto, foram comparados quatro modelos para o tempo de atravessamento. O primeiro modelo foi ajustado através de redes neurais para regressão com o uso do algoritmo de retropropagação (backpropagation), o segundo utilizou máquina de vetor suporte para regressão, através da biblioteca Libsvm. O desempenho destes dois modelos, calculados pela medida RMSE, foi comparado ao desempenho da aplicação de análise de sobrevivência, o terceiro modelo, considerada técnica habitual para análise de estudos quantitativos de tempo. A variável resposta usada foi o tempo em dias entre a autuação do processo no 2o Grau e a baixa, escolhida entre indicadores usados em trabalhos acadêmicos e por órgãos judiciais do Brasil e da Europa. O quarto modelo foi ajustado utilizando máquina de vetor suporte para classificação, através da biblioteca LIBSVM. 
A variável resposta para o ajuste deste modelo foi transformada em ordinal por meio da estratificação em faixas de tempo, o que permitiu o cálculo da medida acurácia para aferir o desempenho. As covariáveis usadas para os ajustes foram categóricas e estavam disponíveis no banco de dados do TRF. Após os ajustes, foram aplicadas regras de associação às faixas de tempo com o objetivo de encontrar as características dos processos mais lentos e morosos. Também foi analisada a viabilidade de estabelecer parâmetros de tempos razoáveis. Para utilização em previsões, sugere-se o modelo de classificação de faixa de tempo e, para o estabelecimento de padrões, o modelo ajustado por redes neurais ou por máquina de vetor suporte. Entre as sugestões para trabalhos futuros estão a construção de uma tábua de baixa de processos, análoga às tábuas atuariais de mortalidade, e o estabelecimento de padrões para considerar processos como morosos. / The courts, as public institutions, are fundamental to economic and social development. However, the main problems of the Brazilian judiciary, pointed out by the Ministry of Justice, are the large backlog of cases, the lack of access to justice, and slowness, considered the main aspect of the crisis of the judiciary. In this sense, this thesis proposes to structure and compare models of the lead time of civil lawsuits in federal courts of second instance, to serve as information for the parties to a lawsuit and for the administration. The study was conducted at the Tribunal Regional Federal da 4a Região, with data from civil lawsuits closed in 2017. Four models of lead time were compared. The first was fitted by a neural network for regression using the backpropagation algorithm; the second by support vector regression using the Libsvm library. 
The performance of these two models, measured by RMSE, was compared to that of the third model, survival analysis, considered the usual technique for quantitative time studies. The dependent variable was the time in days between the arraignment date and the case disposition date, chosen from among indicators used in academic studies and by courts in Brazil and Europe. The fourth model was fitted using a support vector machine for classification, also with the Libsvm library. Its dependent variable was made ordinal by stratification into time bands, which allowed accuracy to be used as the performance measure. The independent variables were categorical and were available in the TRF database. Afterwards, association rules were applied to the time bands in order to find the characteristics of the slowest lawsuits. The feasibility of establishing parameters for reasonable times was also analyzed. The time-band classification model is suggested for use in forecasts, and the models fitted by neural networks or support vector machines for establishing time standards. Suggestions for future work include the construction of a case-disposition table, analogous to actuarial mortality tables, and the establishment of standards for considering a lawsuit slow.
