About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Genome Evolution Model (GEM): Design and Application

McSweeny, Andrew 29 December 2010 (has links)
No description available.
52

Abordagens bio-inspiradas aplicadas ao estudo da cognição: um encontro entre biologia, psicologia e filosofia / Bio-inspired approaches applied to the study of cognition: a meeting between biology, psychology, and philosophy

Junqueira, Luís Henrique Féres. January 2006 (has links)
Advisor: Maria Candida Soares Del-Masso / Committee: Alfredo Pereira Júnior / Committee: Gustavo Maia Souza / Abstract: Human concern with questions related to knowledge is ancient, including discussions of its origin, how it is learned, our capacity to use it, and the specific characteristics of human cognition. That concern goes back to the ancient Greek philosophers (2500 B.C.), developing later through the approach of Epistemology, which originated in Western philosophy, and more recently through Functionalism, which belongs to the studies of Philosophy of Mind and Cognitive Science. This last approach in particular contributed to the emergence of research programs that try to understand the workings of the human mind with the help of the computer. Cognitive Science has strong ties to research in Artificial Intelligence, and both areas have been developing since the 1950s. More recently, starting in the 1980s, a new area of study appeared, formed by research in Artificial Life, which works with the possibility of synthesizing living entities by artificial means, and it has since been attracting the attention of researchers interested in the study of cognition. While Cognitive Science has close ties to Philosophy and Psychology, Artificial Life draws strong inspiration from Biology. In this work, we investigate the meeting between these disciplines and their research programs, considering how a joint approach across these areas can contribute to the study of human cognition. / Master's
53

Pareto multi-objective evolution of legged embodied organisms

Teo, Jason T. W., Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW January 2003 (has links)
The automatic synthesis of embodied creatures through artificial evolution has become a key area of research in robotics, artificial life and the cognitive sciences. However, the research has mainly focused on genetic encodings and fitness functions. Considerably less has been said about the role of controllers and how they affect the evolution of morphologies and behaviors in artificial creatures. Furthermore, the evolutionary algorithms used to evolve the controllers and morphologies are predominantly based on a single objective or a weighted combination of multiple objectives, and a large majority of the behaviors evolved are for wheeled or abstract artifacts. In this thesis, we present a systematic study of evolving artificial neural network (ANN) controllers for the legged locomotion of embodied organisms. A virtual but physically accurate world is used to simulate the evolution of locomotion behavior in a quadruped creature. An algorithm using a self-adaptive Pareto multi-objective evolutionary optimization approach is developed. The experiments are designed to address five research aims investigating: (1) the search space characteristics associated with four classes of ANNs with different connectivity types, (2) the effect of selection pressure from a self-adaptive Pareto approach on the nature of the locomotion behavior and capacity (VC-dimension) of the ANN controller generated, (3) the efficiency of the proposed approach against more conventional methods of evolutionary optimization in terms of computational cost and quality of solutions, (4) a multi-objective approach towards the comparison of evolved creature complexities, (5) the impact of relaxing certain morphological constraints on evolving locomotion controllers.
The results showed that: (1) the search space is highly heterogeneous with both rugged and smooth landscape regions, (2) pure reactive controllers not requiring any hidden layer transformations were able to produce sufficiently good legged locomotion, (3) the proposed approach yielded competitive locomotion controllers while requiring significantly less computational cost, (4) multi-objectivity provided a practical and mathematically-founded methodology for comparing the complexities of evolved creatures, (5) co-evolution of morphology and mind produced significantly different creature designs that were able to generate similarly good locomotion behaviors. These findings attest that a Pareto multi-objective paradigm can spawn highly beneficial robotics and virtual reality applications.
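At the core of any Pareto multi-objective approach like the one above is the dominance relation between objective vectors. The following is a minimal, hypothetical sketch (not code from the thesis) of dominance and non-dominated filtering, assuming all objectives are minimized, e.g. controller size and negated locomotion distance:

```python
# Hypothetical sketch of Pareto dominance; objective vectors are assumed
# to be minimized (e.g. hidden units, negated distance travelled).

def dominates(a, b):
    """True if solution a Pareto-dominates solution b: no objective worse,
    at least one strictly better (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Example objective vectors: (hidden units, -distance travelled)
pop = [(2, -9.0), (5, -9.0), (3, -12.0), (8, -4.0)]
print(pareto_front(pop))  # (2, -9.0) and (3, -12.0) survive
```

Selecting survivors from the non-dominated front, rather than from a weighted sum, is what lets such an algorithm trade off controller complexity against locomotion quality without hand-tuned weights.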
54

Noise, Delays, and Resonance in a Neural Network

Quan, Austin 01 May 2011 (has links)
A stochastic delay differential equation (SDDE) model of a small neural network with recurrent inhibition is presented and analyzed. The model exhibits unexpected transient behavior: oscillations that occur at the boundary of the basins of attraction when the system is bistable. These are known as delay-induced transitory oscillations (DITOs). This behavior is analyzed in the context of stochastic resonance, an unintuitive though widely researched phenomenon in physical bistable systems in which noise can play a constructive role in strengthening an input signal. A method for modeling the dynamics using a probabilistic three-state model is proposed and supported with numerical evidence. The potential implications of this dynamical phenomenon for nocturnal frontal lobe epilepsy (NFLE) are also discussed.
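Numerically, SDDEs of this kind are typically integrated with an Euler–Maruyama scheme that keeps a buffer of past states for the delayed term. The sketch below is illustrative only (the thesis's exact model and parameters are not given here): a generic bistable system with delayed inhibitory feedback, dx = (x - x³ - k·x(t-τ)) dt + σ dW:

```python
import math
import random

# Illustrative Euler-Maruyama integration of a bistable stochastic delay
# differential equation with delayed inhibitory feedback (parameters and
# model form are hypothetical, not the thesis's exact system).

def simulate(k=0.5, tau=1.0, sigma=0.1, dt=0.01, steps=2000, seed=1):
    random.seed(seed)
    lag = int(tau / dt)              # delay expressed in time steps
    x = [1.0] * (lag + 1)            # constant history on [-tau, 0]
    for _ in range(steps):
        delayed = x[-lag - 1]        # x(t - tau), read from the buffer
        drift = x[-1] - x[-1] ** 3 - k * delayed
        noise = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        x.append(x[-1] + drift * dt + noise)
    return x

traj = simulate()
```

Sweeping `sigma` in such a setup is the usual way to probe stochastic resonance: too little noise and transitions between basins never happen, too much and the bistable structure is washed out.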
55

Um algoritmo de vida artificial para agrupamento de dados variantes no tempo / An artificial life algorithm for clustering time-varying data

Santos, Diego Gadens dos 14 September 2012 (has links)
Fundo Mackenzie de Pesquisa / Current technologies have made it possible to generate and store data in high volumes, but processing and extracting information from large databases is not as easy as creating them. This gap has stimulated the search for efficient techniques capable of extracting the useful, non-trivial knowledge intrinsic to these large data sets. The goal of this work is to propose a bio-inspired algorithm, based on the Boids artificial life model, for clustering data in dynamic environments, i.e., in databases updated over time, such as financial transactions, climate data, and messages posted on social networks. The Boids algorithm was originally created to simulate the coordinated movement observed in flocks of birds and other animals, so some modifications must be applied to use it for data clustering. These changes are made to the classical cohesion, separation, and alignment rules of the Boids model so that they take into account the distance (similarity/dissimilarity) among data objects. The resulting objects position themselves and move around the space, representing the natural groups within the data: similar objects tend to form dynamic flocks (clusters) of Boids, while dissimilar objects tend to keep a larger distance between them. The intrinsically dynamic character of the Boids made the proposed algorithm, named dcBoids (dynamic clustering Boids), a natural candidate for clustering time-varying data. The results attest to the robustness of the algorithm under different evaluation measures and on various databases from the literature with artificially inserted dynamics.
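The key modification described above, weighting the flocking rules by data similarity, can be sketched as follows. This is a hypothetical minimal illustration, not dcBoids itself: each boid carries a data object, and a single steering rule attracts it toward similar neighbors and repels it from dissimilar ones:

```python
import math

# Hypothetical sketch of similarity-weighted flocking: cohesion and
# separation are merged into one force whose sign depends on how similar
# two boids' data objects are. (The actual dcBoids rules are richer.)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similarity(a, b, scale=1.0):
    """Map a data-space distance into (0, 1]; 1.0 means identical objects."""
    return math.exp(-euclidean(a, b) / scale)

def steering_force(boid, neighbors, scale=1.0):
    """Attract toward similar neighbors, repel from dissimilar ones (2-D)."""
    force = [0.0, 0.0]
    for other in neighbors:
        s = similarity(boid["data"], other["data"], scale)
        w = 2.0 * s - 1.0                      # > 0 attracts, < 0 repels
        for i in range(2):
            force[i] += w * (other["pos"][i] - boid["pos"][i])
    return force

b    = {"pos": [0.0, 0.0], "data": [0.0]}
near = {"pos": [1.0, 0.0], "data": [0.1]}   # similar data -> attraction
far  = {"pos": [-1.0, 0.0], "data": [5.0]}  # dissimilar   -> repulsion
print(steering_force(b, [near, far]))       # net force points toward +x
```

Iterating such forces over all boids makes similar objects coalesce into moving flocks, which is exactly the behavior the abstract describes exploiting for cluster tracking as the data change.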
56

CLASSIFICAÇÃO DE NÓDULOS PULMONARES UTILIZANDO VIDAS ARTIFICIAIS, MVS E MEDIDAS DIRECIONAIS DE TEXTURA / CLASSIFICATION OF PULMONARY NODULES USING ARTIFICIAL LIFE, MVS AND TEXTURE DIRECTIONAL MEASURES

Froz, Bruno Rodrigues 02 February 2015 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Lung cancer is known for presenting the highest mortality rate and one of the lowest survival rates after diagnosis, mainly because of late detection and treatment. To assist lung cancer specialists, computer-aided diagnosis systems are developed to automate the detection and diagnosis of this disease. This work proposes a methodology to classify, from computed tomography images, lung nodule candidates and non-nodule candidates. The Lung Image Database Consortium (LIDC) image database is used to create one image database of nodule candidates and another of non-nodule candidates. Three techniques are used to extract texture measurements. The first is the artificial life algorithm Artificial Crawlers. The second uses the Rose Diagram to extract directional measurements. The third is a hybrid model joining the Artificial Crawlers and Rose Diagram texture measurements. For classification, the Support Vector Machine (SVM) classifier with a radial basis function kernel is used. The achieved results are very promising: with 833 LIDC exams, divided into 60% for training and 40% for testing, we reached a mean accuracy of 94.30%, a mean sensitivity of 91.86%, a mean specificity of 94.78%, an accuracy coefficient of variation of 1.61%, and a mean area under the ROC curve of 0.922.
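The radial basis function kernel at the heart of the SVM above measures how close two feature vectors are in texture space: K(x, y) = exp(-γ·||x - y||²). A minimal sketch, with purely hypothetical texture-feature values and an arbitrary γ:

```python
import math

# Minimal sketch of the RBF (Gaussian) kernel used by an SVM classifier:
#   K(x, y) = exp(-gamma * ||x - y||^2)
# The feature vectors and gamma below are hypothetical, for illustration.

def rbf_kernel(x, y, gamma=5.0):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

nodule     = [0.82, 0.10]
candidate  = [0.80, 0.12]   # similar texture  -> kernel value near 1
non_nodule = [0.10, 0.90]   # different texture -> kernel value near 0

print(rbf_kernel(nodule, candidate))
print(rbf_kernel(nodule, non_nodule))
```

In the trained SVM, the decision for a new candidate is a weighted sum of such kernel values against the support vectors, so the kernel effectively acts as a learned similarity to the prototypical nodule and non-nodule textures.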
57

INTELLIGENT SOLID WASTE CLASSIFICATION SYSTEM USING DEEP LEARNING

Michel K Mudemfu (13558270) 31 July 2023 (has links)
The proper classification and disposal of waste are crucial in reducing environmental impacts and promoting sustainability. Several solid waste classification systems have been developed over the years, ranging from manual sorting to mechanical and automated sorting. Manual sorting is the oldest and most commonly used method, but it is time-consuming and labor-intensive. Mechanical sorting is more efficient and cost-effective, but it is not always accurate and requires constant maintenance. Automated sorting systems use different types of sensors and algorithms to classify waste, making them more accurate and efficient than manual and mechanical systems. In this thesis, we propose the development of an intelligent solid waste detection, classification, and tracking system using deep learning techniques. To address the limited number of samples in the TrashNetV2 dataset and enhance model performance, a data augmentation process was implemented. This process aimed to prevent overfitting and mitigate data scarcity while improving the model's robustness. Various augmentation techniques were employed, including random rotation within a range of -20° to 20° to account for different orientations of the recycled materials. A random blur of up to 1.5 pixels simulated slight variations in image quality that can arise during image acquisition. Horizontal and vertical flips were applied randomly to accommodate variations in the appearance of recycled materials based on their orientation within the image. Additionally, images were scaled to 416 by 416 pixels, maintaining a consistent image size while increasing the dataset's overall size. Further variability was introduced through random cropping, with a zoom level between 0% and 25%. Lastly, hue variations within a range of -20° to 20° were randomly introduced to replicate lighting-condition variations that may occur during image acquisition. These augmentation techniques collectively improved the dataset's diversity and the model's performance.

In this study, the YOLOv8, EfficientNet-B0, and VGG16 architectures were evaluated, with both stochastic gradient descent (SGD) and Adam used as optimizers; SGD provided better test accuracies than Adam.

Among the three models, YOLOv8 showed the best performance, with the highest mean average precision (mAP) of 96.5%. YOLOv8 emerges as the top performer, with ROC values varying from 92.70% (metal) to 98.40% (cardboard); it therefore outperforms both VGG16 and EfficientNet in terms of ROC values and mAP. The findings demonstrate that our novel classifier-tracker system, built from YOLOv8 and supervision algorithms, surpasses conventional deep learning methods in precision, resilience, and generalization ability. Our contribution to waste management is the development and implementation of an intelligent solid waste detection, classification, and tracking system using computer vision and deep learning techniques. The system can accurately detect, classify, and localize various types of solid waste on a moving conveyor, including cardboard, glass, metal, paper, and plastic, which can significantly improve the efficiency and accuracy of waste sorting processes.

This research provides a promising solution for real-time detection, classification, localization, and tracking of solid waste materials, which can be further integrated into existing waste management systems. Through comprehensive experimentation and analysis, we demonstrate the superiority of our approach over traditional methods, with higher accuracy and faster processing times.

Our findings provide a compelling case for the implementation of intelligent solid waste sorting.
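The augmentation recipe described above amounts to sampling a set of random transformation parameters per training image. A sketch of that sampling step, using the stated ranges (rotation -20° to 20°, blur up to 1.5 px, random flips, crop zoom 0-25%, hue shift -20° to 20°, resize to 416×416); applying the parameters to real images would use an image library, which is omitted here:

```python
import random

# Sketch of per-image augmentation parameter sampling. The ranges mirror
# those stated in the abstract; the function names and dict layout are
# this sketch's own, not the thesis's code.

def sample_augmentation(rng):
    return {
        "rotation_deg": rng.uniform(-20.0, 20.0),
        "blur_px": rng.uniform(0.0, 1.5),
        "flip_horizontal": rng.random() < 0.5,
        "flip_vertical": rng.random() < 0.5,
        "crop_zoom_pct": rng.uniform(0.0, 25.0),
        "hue_shift_deg": rng.uniform(-20.0, 20.0),
        "resize": (416, 416),
    }

params = sample_augmentation(random.Random(0))
print(params)
```

Drawing fresh parameters for every epoch is what multiplies the effective dataset size and prevents the detector from memorizing particular orientations or lighting conditions.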
58

Universal Computation and Memory by Neural Switching / Universalcomputer und Speicher mittels neuronaler Schaltvorgänge

Schittler Neves, Fabio 28 October 2010 (has links)
No description available.
59

Trustworthy AI: Ensuring Explainability and Acceptance

Davinder Kaur (17508870) 03 January 2024 (has links)
In the dynamic realm of Artificial Intelligence (AI), this study explores the multifaceted landscape of Trustworthy AI with a dedicated focus on achieving both explainability and acceptance. The research addresses the evolving dynamics of AI, emphasizing the essential role of human involvement in shaping its trajectory.

A primary contribution of this work is the introduction of a novel "Trustworthy Explainability Acceptance Metric", tailored for the evaluation of AI-based systems by field experts. Grounded in a versatile distance acceptance approach, this metric provides a reliable measure of acceptance value. Practical applications of this metric are illustrated, particularly in a critical domain like medical diagnostics. Another significant contribution is the proposal of a trust-based security framework for 5G social networks. This framework enhances security and reliability by incorporating community insights and leveraging trust mechanisms, presenting a valuable advancement in social network security.

The study also introduces an artificial conscience-control module model, innovating with the concept of "Artificial Feeling." This model is designed to enhance AI system adaptability based on user preferences, ensuring controllability, safety, reliability, and trustworthiness in AI decision-making. This innovation contributes to fostering increased societal acceptance of AI technologies. Additionally, the research conducts a comprehensive survey of foundational requirements for establishing trustworthiness in AI. Emphasizing fairness, accountability, privacy, acceptance, and verification/validation, this survey lays the groundwork for understanding and addressing ethical considerations in AI applications. The study concludes with an exploration of quantum alternatives, offering fresh perspectives on algorithmic approaches in trustworthy AI systems. This exploration broadens the horizons of AI research, pushing the boundaries of traditional algorithms.

In summary, this work significantly contributes to the discourse on Trustworthy AI, ensuring both explainability and acceptance in the intricate interplay between humans and AI systems. Through its diverse contributions, the research offers valuable insights and practical frameworks for the responsible and ethical deployment of AI in various applications.
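A "distance acceptance" metric of the kind described can be pictured as comparing expert ratings of an AI system's explanations against an ideal rating vector and letting acceptance decay with their normalized distance. The sketch below is purely illustrative; the thesis's actual metric, criteria, and scaling are not specified here:

```python
import math

# Purely illustrative distance-based acceptance score: expert ratings of
# an explanation are compared against an ideal vector, and acceptance
# decreases linearly with their normalized Euclidean distance. All names,
# criteria, and scales here are hypothetical.

def acceptance_score(ratings, ideal, max_distance):
    """Map expert ratings to [0, 1]; 1.0 means perfect agreement."""
    d = math.sqrt(sum((r - i) ** 2 for r, i in zip(ratings, ideal)))
    return max(0.0, 1.0 - d / max_distance)

# Five explanation criteria rated 1-5 by a field expert (hypothetical).
ideal = [5, 5, 5, 5, 5]
expert = [4, 5, 3, 5, 4]
worst_case = math.sqrt(5 * 4 ** 2)   # all criteria rated 1 vs. ideal 5
print(acceptance_score(expert, ideal, worst_case))
```

Normalizing by the worst-case distance keeps the score in [0, 1] regardless of how many criteria are rated, which makes scores comparable across evaluation panels of different sizes.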
