The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Incremental and developmental perspectives for general-purpose learning systems

Martínez Plumed, Fernando 07 July 2016 (has links)
The stupefying success of Artificial Intelligence (AI) for specific problems, from recommender systems to self-driving cars, has not yet been matched by similar progress in general AI systems capable of coping with a variety of problems. This dissertation deals with the long-standing problem of creating more general AI systems, through the analysis of their development and the evaluation of their cognitive abilities. Firstly, this thesis contributes a general-purpose learning system that meets several desirable characteristics in terms of expressiveness, comprehensibility and versatility. The system works with approaches that are inherently general: inductive programming and reinforcement learning. The system does not rely on a fixed library of learning operators but can be endowed with new ones, and is therefore able to operate in a wide variety of contexts. This flexibility, jointly with its declarative character, makes it possible to use the system as an instrument for better understanding the role (and difficulty) of the constructs that each task requires. The learning process is also overhauled with a new developmental and lifelong approach for knowledge acquisition, consolidation and forgetting, which is necessary when bounded resources (memory and time) are considered. Secondly, this thesis analyses whether the use of intelligence tests for AI evaluation is a better alternative to most task-oriented evaluation approaches in AI. Accordingly, we review what has been done when AI systems have been confronted with tasks taken from intelligence tests. In this regard, we scrutinise what intelligence tests measure in machines, whether they are useful to evaluate AI systems, whether they are really challenging problems, and whether they are useful to understand (human) intelligence. Finally, the analysis of the concepts of development and incremental learning in AI systems is done at the conceptual level but also through several of these intelligence tests, providing further insight for the understanding and construction of general-purpose developing AI systems. / Martínez Plumed, F. (2016). Incremental and developmental perspectives for general-purpose learning systems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/67269
52

Development of a pipeline to allow continuous development of software onto hardware : Implementation on a Raspberry Pi to simulate a physical pedal using the Hardware In the Loop method

Ryd, Jonatan, Persson, Jeffrey January 2021 (has links)
Saab wants to examine the Hardware In the Loop method as a concept, and what a Hardware In the Loop infrastructure would look like. Hardware In the Loop is based upon continuously testing hardware, which in this case is simulated. The software Saab wants to use for the Hardware In the Loop method is Jenkins, a Continuous Integration and Continuous Delivery tool. To simulate the hardware, they want to examine the use of an Application Programming Interface between a Raspberry Pi and the test automation framework Robot Framework. The reason Saab wants this examined is that they believe this method can improve the rate of testing, the quality of the tests, and thereby the quality of their products. The theory behind Hardware In the Loop, Continuous Integration, and Continuous Delivery is explained in this thesis. The Hardware In the Loop method was implemented on top of the Continuous Integration and Continuous Delivery tool Jenkins. An Application Programming Interface between the General Purpose Input/Output pins on a Raspberry Pi and Robot Framework was developed. With these implementations done, the Hardware In the Loop method was successfully integrated, with a Raspberry Pi used to simulate the hardware.
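To make the GPIO-to-Robot-Framework bridge described in the abstract above more concrete, here is a minimal, hypothetical sketch of a Python keyword library of the kind Robot Framework can import to drive a Raspberry Pi pin as a simulated pedal. The pin number, keyword names and use of the RPi.GPIO package are illustrative assumptions, not the implementation from the thesis.

```python
# hil_gpio_library.py -- hypothetical Robot Framework keyword library for a simulated pedal.
import RPi.GPIO as GPIO  # common Raspberry Pi GPIO package, assumed available on the Pi


class HilGpioLibrary:
    """Methods of this class become Robot Framework keywords (e.g. "Press Pedal")."""

    def __init__(self, pedal_pin=17):              # BCM pin 17 is an arbitrary example
        self.pedal_pin = pedal_pin
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.pedal_pin, GPIO.OUT, initial=GPIO.LOW)

    def press_pedal(self):
        """Drive the pin high, as if the physical pedal were pressed."""
        GPIO.output(self.pedal_pin, GPIO.HIGH)

    def release_pedal(self):
        """Drive the pin low again."""
        GPIO.output(self.pedal_pin, GPIO.LOW)

    def pedal_should_be_pressed(self):
        """Fail the test if the pin does not read back high."""
        if GPIO.input(self.pedal_pin) != GPIO.HIGH:
            raise AssertionError("Pedal pin is not high")
```

A Robot Framework suite would load this with a `Library    HilGpioLibrary` line in its settings, and a Jenkins pipeline could run that suite on the Pi after every commit, which is the continuous-testing loop the abstract outlines.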
53

On continuous maximum flow image segmentation algorithm

Marak, Laszlo 28 March 2012 (has links)
In recent years, with the advance of computing equipment and image acquisition techniques, the size, dimensionality and content of acquired images have increased considerably, and the performance differential between classical single-processor and parallel architectures has shifted decidedly in favour of the latter. Yet programming practices have remained largely the same, leading to a glaring lack of performance even on these architectures. In this thesis we consider in depth one particular algorithm, the continuous maximum flow computation. We review in detail why this algorithm is useful and interesting, and we propose efficient and portable implementations on various architectures, from single-processor machines to SMP and NUMA systems, as well as on massively parallel GPGPU architectures. We also explore applications and evaluate its performance, in terms of segmentation quality, on large images from materials science and nano-scale biology.
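As a rough, single-threaded NumPy illustration of a continuous maximum flow iteration of the kind studied in this thesis (an Appleton and Talbot style scheme; the thesis's actual discretization, stopping criteria and parallel SMP/NUMA/GPGPU implementations differ), one can alternate a potential update, a flow update, and a projection of the flow onto the capacity constraint:

```python
import numpy as np

def continuous_max_flow(g, source_mask, sink_mask, n_iter=500, tau=0.2):
    """Toy 2-D continuous maximum flow sketch: g is a capacity field in (0, 1],
    source/sink masks pin the potential P to 1/0. Returns a binary segmentation."""
    P = np.zeros_like(g)                       # scalar potential
    Fx = np.zeros_like(g)                      # flow field components
    Fy = np.zeros_like(g)
    for _ in range(n_iter):
        # potential update: dP/dt = -div F
        divF = np.zeros_like(g)
        divF[:, 1:] += Fx[:, 1:] - Fx[:, :-1]
        divF[1:, :] += Fy[1:, :] - Fy[:-1, :]
        P -= tau * divF
        P[source_mask] = 1.0                   # boundary conditions
        P[sink_mask] = 0.0
        # flow update: dF/dt = -grad P
        Fx[:, :-1] -= tau * (P[:, 1:] - P[:, :-1])
        Fy[:-1, :] -= tau * (P[1:, :] - P[:-1, :])
        # reprojection: enforce |F| <= g pointwise
        norm = np.sqrt(Fx ** 2 + Fy ** 2)
        scale = np.minimum(1.0, g / np.maximum(norm, 1e-12))
        Fx *= scale
        Fy *= scale
    return P > 0.5
```

Each pixel is updated independently within a step, which is roughly what makes this class of algorithm a good candidate for the parallel architectures discussed above.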
54

Development of imaging and reconstruction algorithms on parallel processing architectures for applications in non-destructive testing

Pedron, Antoine 28 May 2013 (has links)
This thesis work is placed at the interface between the scientific domain of ultrasonic non-destructive testing and algorithm-architecture adequation. Ultrasonic non-destructive testing includes a group of analysis techniques used in science and industry to evaluate the properties of a material, component or system without causing damage. In order to characterize possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST, within the CIVA software platform. The evolution of acquisition hardware implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstruction times. General-purpose processors (GPP) evolving towards multicore parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. The two algorithms differ in their parallelization scheme. The first one can be parallelized straightforwardly on GPP, whereas on GPU an intensive use of atomic instructions is required. For the second, parallelism is easier to express, but loop-nest ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are used and compared through chosen benchmarks. The integration of these prototypes into the CIVA platform highlighted a set of issues related to long-term code maintenance and durability.
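The remark about atomic instructions can be illustrated with a tiny, hypothetical scatter-accumulation pattern: each measurement contributes to an output cell chosen at run time, so concurrent GPU threads writing the same cell need atomic adds. The NumPy sketch below is a serial analogue (np.add.at plays the role of atomicAdd) and is not the CIVA reconstruction code itself.

```python
import numpy as np

# Hypothetical sizes: many measurements scattered into a reconstruction grid.
n_meas, n_cells = 100_000, 4_096
rng = np.random.default_rng(0)
cell_index = rng.integers(0, n_cells, size=n_meas)   # which cell each sample contributes to
amplitude = rng.random(n_meas)                       # its contribution

image = np.zeros(n_cells)
np.add.at(image, cell_index, amplitude)              # unbuffered accumulation; on a GPU,
                                                     # colliding writes would need atomicAdd
```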
55

The international financial reporting standard for small and medium-sized entities : the need and form of a third-tier financial reporting standard in Namibia

Klink, Petra 27 May 2016 (has links)
The development of the International Financial Reporting Standard for Small and Medium-sized Entities (IFRS for SMEs) was based on the demand for a financial reporting standard that is simpler than full International Financial Reporting Standards (IFRS). Despite these simplifications, the requirements of the IFRS for SMEs are still regarded as complex and costly to apply, especially for micro entities in developing countries such as Namibia. Consequently, there is a need to further simplify financial reporting requirements for micro entities in the form of a third-tier financial reporting standard. A third-tier standard can take the form of either a separately developed standard or a simplification of existing standard(s). Taking into account the Namibian financial reporting environment, there are more advantages to developing a standard based on existing standard(s). It is therefore recommended that Namibia develop a third-tier standard based on the IFRS for SMEs. / Financial Accounting / M. Phil. (Accounting Science)
56

A dynamical particle system as a driver for optimal statistical sampling

Mašek, Jan Unknown Date (has links)
The presented doctoral thesis aims at the development of a new, efficient tool for optimizing the uniformity of point samples. One use case for these point sets is as optimized sets of integration points in statistical analyses of computer models using Monte Carlo type integration. It is well known that pursuing uniformly distributed sets of integration points is the only possible way of decreasing the error of estimating an integral over an unknown function. The work begins with a survey of currently used criteria for evaluating and/or optimizing the uniformity of point sets. A critical evaluation of their properties is presented, leading to suggestions for improving the spatial and statistical uniformity of the resulting samples. A refined variant of the general formulation of the phi optimization criterion has been derived by incorporating a periodically repeated design domain along with scale-independent behaviour of the criterion. Based on the notion of a physical analogy between a set of sampling points and a dynamical system of mutually repelling particles, a hyper-dimensional N-body system has been selected as the driver of the developed optimization tool. Because the simulation of such a dynamical system is known to be a computationally intensive task, an efficient solution using the massively parallel GPGPU platform Nvidia CUDA has been developed. An intensive study of the properties of this complex architecture turned out to be necessary to fully exploit the possible speedup.
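A minimal sketch of a phi_p-style uniformity criterion on a periodically repeated unit hypercube is given below; the exponent p, the toroidal distance and the normalization are illustrative assumptions, and the refined, scale-independent formulation derived in the thesis may differ.

```python
import numpy as np

def phi_p_periodic(points, p=50):
    """phi_p-like criterion: sum of inverse pairwise distances on a periodic unit
    hypercube, raised to 1/p. Lower values indicate a more uniform point set."""
    n = points.shape[0]
    diff = np.abs(points[:, None, :] - points[None, :, :])
    diff = np.minimum(diff, 1.0 - diff)              # toroidal (periodic) distance per axis
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(n, k=1)                     # each pair counted once
    return np.sum(dist[iu] ** (-p)) ** (1.0 / p)

pts = np.random.rand(64, 3)                          # 64 random candidate points in 3-D
print(phi_p_periodic(pts))                           # an optimizer would try to lower this value
```

In the particle-system analogy mentioned above, mutually repelling particles effectively descend the gradient of such a potential, and evaluating those pairwise interactions is, roughly speaking, the computationally intensive part offloaded to CUDA.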
57

SHAP - Secure Hardware Agent Platform

Zabel, Martin, Preußer, Thomas B., Reichel, Peter, Spallek, Rainer G. 11 June 2007 (has links)
This paper presents a novel implementation of an embedded Java microarchitecture for secure, real-time, and multi-threaded applications. Together with support for modern features of object-oriented languages, such as exception handling, automatic garbage collection and interface types, a general-purpose platform is established which also suits the agent concept. Especially with real-time issues in mind, new techniques have been implemented in our Java microarchitecture, such as integrated stack and thread management for fast context switching, concurrent garbage collection for real-time threads, and autonomous control flows through preemptive round-robin scheduling.
58

Reliable General Purpose Sentiment Analysis of the Public Twitter Stream

Haldenwang, Nils 27 September 2017 (has links)
General purpose Twitter sentiment analysis is a novel field that is closely related to traditional Twitter sentiment analysis but differs in some key aspects. The main difference lies in the fact that the novel approach considers the unfiltered public Twitter stream, while most previous approaches applied various filtering steps that are not feasible for many applications. Another goal is to yield more reliable results by classifying a tweet as positive or negative only if it distinctly expresses the respective sentiment, and marking the remaining messages as uncertain. Traditional approaches are often not that strict. In the course of this thesis it was verified that the novel approach differs significantly from the traditional approach. Moreover, the experimental results indicated that the archetypical approaches can be transferred to the new domain, but related-domain data is consistently subpar compared to high-quality in-domain data. Finally, the viability of the best classification algorithm was qualitatively verified in a real-world setting that was also developed within the course of this thesis.
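To make the "classify only when the sentiment is distinct" idea concrete, here is a hedged toy sketch of a probabilistic classifier with an abstention threshold; the toy data, features, classifier and threshold are assumptions for illustration and not the models or corpora evaluated in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: 1 = positive, 0 = negative.
train_texts = ["love this so much", "what a great day", "awful service", "i hate this"]
train_labels = [1, 1, 0, 0]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

def classify(tweet, threshold=0.8):
    """Return 'positive'/'negative' only for confident predictions, else 'uncertain'."""
    p_pos = clf.predict_proba(vec.transform([tweet]))[0, 1]
    if p_pos >= threshold:
        return "positive"
    if p_pos <= 1.0 - threshold:
        return "negative"
    return "uncertain"                               # abstain on ambiguous tweets

print(classify("great day but awful service"))       # mixed sentiment, likely 'uncertain'
```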
59

Accelerated Sparse Matrix Operations in Nonlinear Least Squares Solvers

Polok, Lukáš January 2017 (has links)
This thesis focuses on data structures for the representation of sparse block matrices and the associated computational algorithms that I have designed. Sparse block matrices arise in many problems, for instance when solving least squares problems. The nonlinear least squares (NLS) method is often applied in robotics to solve the robot localization problem (SLAM) or in the related 3D reconstruction tasks in computer vision (BA, SfM). Finite element (FEM) and partial differential equation (PDE) problems in the field of physical simulation can also have block structure. Most existing sparse linear algebra implementations use sparse matrices with the granularity of individual elements, and only a few support sparse block matrices. This may be caused by the complexity of block formats, which reduces computational speed unless the blocks are large enough. Some of the specialized NLS optimizers in robotics and computer vision use block matrices as an internal representation to reduce the cost of assembling the sparse matrices, but ultimately convert this representation to an element-wise sparse matrix for the equation solver implementation. Existing implementations for sparse block matrices mostly concentrate on a single operation, often matrix-vector multiplication. The solution proposed in this dissertation covers a wider spectrum of functionality: functions are implemented for efficient assembly of sparse block matrices, multiplication of a matrix by a vector or by another matrix, as well as the solution of triangular systems and Cholesky factorization. These functions can easily be used to solve systems of linear equations using analytical or iterative methods, or to compute eigenvalues. Fast algorithms are described both for the main processor (CPU) and for graphics accelerators (GPU). The proposed algorithms are integrated in the SLAM++ library, which solves the nonlinear least squares problem with a focus on problems in robotics and computer vision. An evaluation on standard datasets is performed, in which the proposed methods achieve considerably better results than existing methods described in the literature, without compromising the precision or generality of the solution.
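For readers unfamiliar with block-sparse storage, the short sketch below builds a block compressed sparse row (BSR) matrix with SciPy and multiplies it by a vector. This only illustrates the general idea of storing dense blocks inside a sparse layout; SLAM++ implements its own block structures and kernels rather than using SciPy.

```python
import numpy as np
from scipy.sparse import bsr_matrix

# Three dense 3x3 blocks placed in a 2x3 grid of block rows/columns.
blocks = np.stack([np.eye(3), 2.0 * np.eye(3), np.full((3, 3), 0.5)])
indices = np.array([0, 2, 1])   # block-column index of each stored block
indptr = np.array([0, 2, 3])    # block row 0 owns blocks 0-1, block row 1 owns block 2
A = bsr_matrix((blocks, indices, indptr), shape=(6, 9))

x = np.ones(9)
print(A @ x)                    # block-aware sparse matrix-vector product
```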
60

Perceived Affordance and Socio-Technical Transition: Blockchain for the Swedish Public Sector

JONSSON, JOHAN R. January 2018 (has links)
The Swedish public sector is under constant pressure to improve processes and services through further digitalization. Blockchain is a novel technology that shows promise of enabling functionality desired within the sector. However, as the technology is still in its infancy, the practical value it could offer the sector remains unproven. In this master thesis, the socio-technical transition of the public sector towards adopting blockchain is analyzed using the multi-level perspective framework. The sector is operationalized as an incumbent socio-technical regime and blockchain as a collection of niche innovations. Affordance theory and the multi-level perspective are combined to analyze how the perception of blockchain affects the potential transition pathways. The primary empirical data is gathered through a series of interviews with key individuals from both the Swedish public sector and the blockchain community, as well as from attending blockchain events. Secondary data is gathered through a review of various types of literature on the topic. The findings of the thesis show that the practical value and functionality that blockchain offers, and that match the needs of the sector, are verification, authentication, traceability, automation of simple logical functions, and digitization of unique value. The conceptual solutions identified as suitable today are: blockchain for identity management, blockchain for data verification, blockchains for property registers of, e.g., vehicles and real estate, and external industry blockchains for improved traceability of, e.g., supply chains and sales records. The thesis also derives recommendations for the public sector, indicating that, e.g., active education, revision of regulation, and international cooperation would further a potential transition towards blockchain. It also finds that perceived affordances of a technology in its early stages affect the transition pathways: barriers to entry, the number of potential adopting application sectors, the level of coordination, and the resources available for development are all influenced by these perceptions.
