41

Capturing Information and Communication Technologies as a General Purpose Technology

Le hir, Boris 20 November 2012 (has links) (PDF)
This thesis studies Information and Communication Technologies (ICT) as a General Purpose Technology (GPT) and their role in the evolution of labor productivity in the United States and Europe during recent decades. It is organized in three parts corresponding to the fundamental GPT features: wide scope for further development, ubiquity, and the ability to create large technological opportunities. The first part begins by describing innovation in ICT, opening with a short historical review of ICT inventions followed by an analysis of current data on innovation in the field. In particular, it shows how the US has so far outperformed European countries in inventing ICT. It then takes stock of the measurement difficulties arising from the rate and nature of the change such technologies create. The second part deals with the ubiquitous nature of ICT. It first describes ICT diffusion across countries and industries and reviews the economic literature on the direct contribution of ICT to labor productivity growth in the US and Europe. The next chapter studies factor demand behaviour in sectors that either produce ICT or use it intensively. The third part focuses on ICT's ability to create opportunities for complementary innovation. It first identifies the nature of ICT complementary innovations and the corresponding efforts, and then shows that national accounts must be improved in order to count these efforts as investment. It further shows that, among the eleven European countries studied, the problem is highly concentrated in a few countries that invest less both in ICT and in innovative assets, and that these two types of effort are complementary.
42

On continuous maximum flow image segmentation algorithm

Marak, Laszlo 28 March 2012 (has links) (PDF)
In recent years, with advances in computing equipment and image acquisition techniques, the size, dimensionality and content of acquired images have increased considerably. Unfortunately, a steadily widening gap separates the classical and parallel programming paradigms from their actual performance on modern computer hardware. In this thesis we consider in depth one particular algorithm, the continuous maximum flow computation. We review in detail why this algorithm is useful and interesting, and we propose efficient and portable implementations on various architectures. We also examine how it performs in terms of segmentation quality on recent problems from materials science and nano-scale biology.
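For reference, a widely used statement of the continuous maximum flow problem (the formulation of Yuan, Bae and Tai; whether the thesis adopts exactly this variant is an assumption here) maximizes the total flow from a source field p_s to a sink field p_t through a spatial flow q over the image domain \Omega:

    \max_{p_s,\, p_t,\, q} \int_\Omega p_s(x)\, dx
    subject to  |q(x)| \le C(x),  p_s(x) \le C_s(x),  p_t(x) \le C_t(x),
                \operatorname{div} q(x) - p_s(x) + p_t(x) = 0,

where C_s, C_t and C are capacity functions derived from the image; thresholding the multiplier associated with the flow-conservation constraint recovers the segmentation.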
43

Technological breakthroughs and productivity growth

Edquist, Harald January 2006 (has links)
This dissertation consists of four self-contained studies concentrating on productivity development following major technological breakthroughs. All four studies are concerned with issues in measuring productivity. Three of the papers use a comparative historical perspective and primarily focus on some of the differences and similarities in productivity growth following each technological breakthrough. The fourth paper focuses solely on the ICT revolution and the problems associated with measuring productivity in the Swedish Radio, television and communication equipment (RTC) industry. Paper 1, Technological Breakthroughs and Productivity Growth (with Magnus Henrekson), examines productivity growth following three major technological breakthroughs: the steam power revolution, electrification and the ICT revolution. The distinction between sectors producing and sectors using the new technology is emphasized. A major finding for all breakthroughs is that there is a long lag from the time of the original invention until a substantial increase in the rate of productivity growth can be observed. There is also strong evidence of rapid price decreases for steam engines, electricity, electric motors and ICT products. However, there is no persuasive direct evidence that the steam engine-producing and electric machinery industries had particularly high productivity growth rates. For the ICT revolution, the highest productivity growth rates are found in ICT-producing industries. It is argued that one explanation might be that hedonic price indexes are not used for the steam engine and the electric motor. Still, it is likely that the rate of technological development has been much more rapid during the ICT revolution than during any of the previous breakthroughs. In paper 2, Do Hedonic Price Indexes Change History? The Case of Electrification, I investigate whether hedonic price indexing would also have large effects on measured prices and productivity during electrification. The hedonic methodology is applied to historical data for electric motors in Sweden in 1900–35. The results show that, based on hedonic price indexes, PPI-deflated prices for electric motors decreased by 4.8 percent per year. This indicates that prices decreased considerably more for electric motors than for total manufacturing. Annual labor productivity growth in Swedish electric machinery in 1919–29 becomes 12.1 percent if the hedonic deflators are used. Thus, there is strong evidence that productivity growth in the electric motor-producing industry was very high during the 1920s. In contrast, according to current best estimates, US annual labor productivity growth in 1919–29 was only 4.1 percent in electric machinery, compared to 5.3 percent in manufacturing. However, hedonic price indexes were not used to calculate US productivity. Finally, it is shown that the price decreases for electric motors in the 1920s were not on par with the price decreases for ICT equipment in the 1990s, even if hedonic indexing is used in both cases. Paper 3, Parallel Development? Productivity Growth Following Electrification and the ICT Revolution, compares labor productivity growth and the contribution to labor productivity growth in Swedish manufacturing during electrification and the ICT revolution. The paper distinguishes between technology-producing industries and intensive and less intensive technology-using industries during the two breakthroughs.
The results show that labor productivity growth and the overall contribution to labor productivity growth were considerably higher in technology-producing industries during the ICT revolution than during electrification. For example, the relative contribution to labor productivity growth in manufacturing from the technology-producing industry was 3.4 percent in 1920–30 compared to 34.4 percent in 1993–2003. On the other hand, the relative contribution to aggregate labor productivity growth was considerably higher in intensive technology-using manufacturing industries during electrification. These findings have an important policy implication, namely that how productivity is measured matters much more for ICT products in the 1990s than for electric motors in the 1920s. Paper 4, The Swedish ICT Miracle: Myth or Reality?, investigates productivity development in Sweden in the 1990s. The results show that much of the recorded Swedish surge in labor productivity was due to the spectacular growth of the Radio, television and communication equipment (RTC) industry. However, the productivity growth of the RTC industry is very sensitive to value added price deflators. Unlike Sweden, the US uses hedonic price indexes for semiconductors and microprocessors, which are important intermediate inputs in the RTC industry. Estimates based on the US intermediate input price deflators for semiconductors and microprocessors suggest that the productivity growth of the Swedish RTC industry during the 1990s can be questioned. This implies that the productivity growth of total manufacturing has also been overestimated. The results for Sweden are also interesting for other countries, such as Finland, Ireland and South Korea, where ICT-producing industries have contributed substantially to labor productivity growth. / Diss. Stockholm : Handelshögskolan, 2006. S. 1-21: introduction and summary; s. 23-194: 4 papers.
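For context, hedonic price indexes of the kind these papers rely on are typically estimated with a time-dummy regression; the exact specification used for the Swedish electric-motor data is not reproduced here:

    \ln p_{it} = \alpha + \sum_k \beta_k x_{kit} + \sum_{t=1}^{T} \delta_t D_{it} + \varepsilon_{it},

where p_{it} is the price of model i in period t, the x_{kit} are quality characteristics (for an electric motor, e.g., power rating and speed), the D_{it} are period dummies, and the quality-adjusted price change between adjacent periods is recovered as \exp(\delta_t - \delta_{t-1}) - 1.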
44

Application specific programmable processors for reconfigurable self-powered devices

Nyländen, T. (Teemu) 27 April 2018 (has links)
Current Internet of Things solutions for simple measurement and monitoring tasks are evolving into ubiquitous sensor networks that constantly observe both our well-being and the conditions of our living environment. The coming omnipresent wireless infrastructure is expected to feature artificial intelligence capabilities that can interpret human actions, gestures and even needs. All of this will require processing power on a par with that of current mobile devices and energy efficiency far beyond it. Current Internet of Things devices rely mostly on commercial low-power off-the-shelf microcontrollers. Optimized solely for low power while paying little attention to computing performance, present solutions are far from achieving the energy efficiency, let alone the compute capability, that future Internet of Things solutions will require. Since this domain is application-specific by nature, using general-purpose processors for signal processing tasks is counterintuitive; dedicated accelerator-based solutions are more likely to meet these strict demands. This thesis proposes one potential solution for achieving the necessary low energy, together with the flexibility and performance the Internet of Things domain requires, in a cost-effective manner using reconfigurable heterogeneous processing. A novel graphics processing unit-style accelerator for the Internet of Things application domain is presented. Because the accelerator can be reconfigured, it can serve most applications of the Internet of Things domain, as well as other application domains. The solution is assessed using two computer vision applications and is demonstrated to achieve an excellent combination of performance and energy efficiency. The accelerator is designed using an efficient and rapid hardware/software co-design flow whose ease of development approaches that of commercial off-the-shelf solutions, which in turn enables a cost-efficient design flow.
45

Efficient Execution Of AMR Computations On GPU Systems

Raghavan, Hari K 11 1900 (has links) (PDF)
Adaptive Mesh Refinement (AMR) is a method which dynamically varies the spatio-temporal resolution of localized mesh regions in numerical simulations, based on the strength of the solution features. By discretizing localized regions of interest at high resolution into rectangular mesh units called patches, AMR provides low computational cost and a high degree of accuracy. General-purpose graphics processing units (GPGPUs), with their support for fine-grained parallelism, offer an attractive option for obtaining high performance for AMR applications, since the data-parallel computations of AMR's finite difference schemes can be performed efficiently on them. This research addresses the challenges of, and develops techniques for, efficient execution of AMR applications with uniform and non-uniform patches on GPUs. In the first part of the thesis, we optimize an AMR model with uniform patches. We have developed strategies for continuous online visualization of time-evolving data for AMR applications executed on GPUs. In-situ visualization plays an important role in analyzing the time-evolving characteristics of the domain structures, and continuous visualization of the output data for various time steps enables better study of the underlying domain and of the model used to simulate it. We reorder the meshes for computations on the GPU based on user input about the subdomain to be visualized, which makes the data available for visualization at a faster rate. We then perform the visualization steps and fix-up operations on the coarse meshes asynchronously on the CPUs while the GPU advances the solution. In experiments on Tesla S1070 and Fermi C2070 clusters, we found that our strategies yield up to 60% improvement in response time and 16% improvement in the rate of visualization of frames over the existing strategy of performing fix-ups and visualization at the end of the time steps. The second part of the thesis deals with adaptive strategies for efficient execution of block-structured AMR applications with non-uniform patches on GPUs. Most AMR approaches use patches of uniform sizes over regions of interest; since this leads to over-refinement, some efforts have focused on forming patches of non-uniform dimensions to improve computational efficiency, as the dimensions of a patch can be tuned to the geometry of a region of interest. While effective hybrid execution strategies exist for applications with uniform patches, our work considers efficient execution of non-uniform patches with different workloads. Our techniques include a geometric bin-packing method to load-balance GPU computations and reduce thread idling, adaptive determination of the amount of work to maximize asynchronism between CPU and GPU executions using a knapsack formulation, and scheduling of communications for multi-GPU executions. We test our strategies on synthetic inputs as well as on traces from real applications. Our experiments on Tesla S1070 and Fermi C2070 clusters with both single-GPU and multi-GPU executions show that our strategies yield up to 69% improvement in performance over existing strategies: bin-packing-based load balancing gives gains of up to 39%, kernel optimizations give up to 20%, and adaptive asynchronism between CPU and GPU executions gives up to 17% over default static asynchronous executions.
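As a rough illustration of the geometric bin-packing idea described above, the sketch below balances non-uniform patch workloads across GPU bins with a first-fit-decreasing heuristic; the patch dimensions, the cell-count cost model and the bin capacity are all hypothetical, and the thesis's actual method may differ in detail.

# Load-balancing sketch: assign non-uniform AMR patches to GPU "bins"
# (e.g., kernel launches or streams) so no bin exceeds a work capacity.
# Assumes each patch individually fits within the capacity.
def pack_patches(patch_dims, capacity):
    # Sort patches by workload (cell count), largest first.
    patches = sorted(enumerate(patch_dims),
                     key=lambda p: p[1][0] * p[1][1], reverse=True)
    bins = []  # each bin: [remaining_capacity, [patch_ids]]
    for pid, (nx, ny) in patches:
        work = nx * ny
        for b in bins:  # first-fit: first bin with enough room
            if b[0] >= work:
                b[0] -= work
                b[1].append(pid)
                break
        else:  # no bin had room: open a new one
            bins.append([capacity - work, [pid]])
    return [b[1] for b in bins]

if __name__ == "__main__":
    dims = [(32, 32), (16, 48), (64, 8), (8, 8), (48, 48), (16, 16)]
    for i, group in enumerate(pack_patches(dims, capacity=4096)):
        print(f"bin {i}: patches {group}")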
46

Incremental and developmental perspectives for general-purpose learning systems

Martínez Plumed, Fernando 07 July 2016 (has links)
The stupefying success of Artificial Intelligence (AI) on specific problems, from recommender systems to self-driving cars, has not yet been matched by similar progress in general AI systems that cope with a variety of problems. This dissertation deals with the long-standing problem of creating more general AI systems, through the analysis of their development and the evaluation of their cognitive abilities. Firstly, this thesis contributes a general-purpose learning system that combines several desirable characteristics in terms of expressiveness, comprehensibility and versatility. The system works with approaches that are inherently general: inductive programming and reinforcement learning. It does not rely on a fixed library of learning operators but can be endowed with new ones, and is therefore able to operate in a wide variety of contexts. This flexibility, together with its declarative character, makes it possible to use the system as an instrument for better understanding the role (and difficulty) of the constructs that each task requires. The learning process is also overhauled with a new developmental and lifelong approach to knowledge acquisition, consolidation and forgetting, which is necessary when bounded resources (memory and time) are considered. Secondly, this thesis analyses whether the use of intelligence tests for AI evaluation is a better alternative to the mostly task-oriented evaluation approaches in AI. Accordingly, we review what has been done when AI systems have been confronted with tasks taken from intelligence tests, scrutinising what intelligence tests measure in machines, whether they are useful for evaluating AI systems, whether they are really challenging problems, and whether they are useful for understanding (human) intelligence. Finally, the concepts of development and incremental learning in AI systems are analysed both at the conceptual level and through several of these intelligence tests, providing further insight for the understanding and construction of general-purpose developmental AI systems. / Martínez Plumed, F. (2016). Incremental and developmental perspectives for general-purpose learning systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/67269
47

Matematický model univerzální stanice v laboratoři VUT FSI OFI. / Mathematical model of circuit in laboratory VUT FSI OFI.

Klapal, Tomáš January 2008 (has links)
This diploma thesis deals with the design and experimental verification of a mathematical model of the experimental circuit for turbine measurement in the laboratory of the Kaplan Department of Fluid Engineering, FME BUT in Brno. Pressure and flow characteristics were modeled based on data measured on the general-purpose experimental circuit. Possibilities for controlling the circuit, mainly via the by-pass, were also taken into account. The characteristic curves should serve for the preliminary design of the test circuit set-up for turbine model measurements.
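As a purely illustrative sketch of this kind of characteristic-curve modeling (the thesis's actual model and measured data are not reproduced here), a head-flow characteristic is often approximated by a quadratic least-squares fit to measured operating points:

# Hypothetical fit of a head-flow characteristic H(Q) = a + b*Q + c*Q^2
# from measured operating points; the data values are invented.
import numpy as np

Q = np.array([0.00, 0.02, 0.04, 0.06, 0.08])   # flow rate [m^3/s]
H = np.array([12.1, 11.8, 11.0, 9.7, 7.9])     # head [m]

c, b, a = np.polyfit(Q, H, 2)                  # coefficients, highest degree first
print(f"H(Q) = {a:.2f} + {b:.2f}*Q + {c:.2f}*Q^2")

# Evaluate the fitted characteristic at an intermediate operating point.
q = 0.05
print(f"H({q}) = {a + b*q + c*q*q:.2f} m")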
48

Design and Performance Analysis of Parallel Processing of SRTP Packets

Wozniak, Jan January 2013 (has links)
Encryption of real-time multimedia data transfers is one of the tasks of a telecommunication infrastructure in achieving the necessary level of security. The execution speed of the encryption algorithm can play a key role in the delay of individual packets, which makes this task interesting from the perspective of optimization methods. This work focuses on the possibilities of parallelizing SRTP processing for the purposes of a telephone exchange using the OpenCL framework, followed by an analysis of the potential improvement.
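The thesis itself targets OpenCL on GPU hardware; as a language-neutral illustration of why SRTP processing parallelizes well, the sketch below encrypts independent packets concurrently with AES in counter mode (SRTP's default cipher). The key handling and IV derivation here are deliberately simplified assumptions, not the thesis's implementation.

# Each SRTP packet is encrypted independently under its own IV, so
# packets can be processed in parallel. Real SRTP derives the IV from
# the session salt, SSRC and packet index; random IVs are used here
# only to keep the sketch self-contained.
import os
from concurrent.futures import ThreadPoolExecutor
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)  # placeholder session key

def encrypt_packet(args):
    iv, payload = args
    enc = Cipher(algorithms.AES(KEY), modes.CTR(iv)).encryptor()
    return enc.update(payload) + enc.finalize()

if __name__ == "__main__":
    # One (IV, payload) pair per RTP packet.
    packets = [(os.urandom(16), os.urandom(1400)) for _ in range(1000)]
    with ThreadPoolExecutor() as pool:
        ciphertexts = list(pool.map(encrypt_packet, packets))
    print(f"encrypted {len(ciphertexts)} packets")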
49

Využití GPU pro akcelerované zpracování obrazu / Image Processing on GPUs

Bačík, Ladislav January 2008 (has links)
This master's thesis deals with modern graphics hardware technologies and their use for general-purpose computing. It focuses primarily on the architecture of unified processors and on algorithm implementation via the CUDA programming interface. The starting point is to choose an algorithm suited to demonstrating GPU horsepower. The main aim of this work is the implementation of a multiplatform library offering algorithms for the vectorization of discrete volumetric data. For this purpose the Marching cubes algorithm, which finds the surface of the processed object, was chosen. The library contains a variant of the algorithm runnable on the graphics device as well as one runnable on the CPU. Finally, we compare both variants and discuss their pros and cons.
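For orientation, a minimal CPU reference for the same isosurface extraction can be obtained with scikit-image's marching cubes; this is not the thesis's library or its CUDA code, merely a sketch of the algorithm's input (a volume) and output (a triangle mesh) on a synthetic sphere.

# CPU reference: extract the zero level set of a signed distance field.
import numpy as np
from skimage.measure import marching_cubes

# Synthetic volumetric data: a sphere of radius 20 in a 64^3 grid.
grid = np.mgrid[:64, :64, :64]
dist = np.sqrt(((grid - 32.0) ** 2).sum(axis=0))
volume = (dist - 20.0).astype(np.float32)

# Marching cubes returns the surface as vertices and triangle indices.
verts, faces, normals, values = marching_cubes(volume, level=0.0)
print(f"{len(verts)} vertices, {len(faces)} triangles")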
50

Real-Time Linux Testbench on Raspberry Pi 3 using Xenomai

Johansson, Gustav January 2018 (has links)
Test benches are commonly used to simulate events for an embedded system for validation purposes. Microcontrollers can be used to build test benches and, for simple cases, can be programmed bare-metal, i.e. without an Operating System (OS). If the test bench is too complex for a microcontroller, a Real-Time Operating System (RTOS) on more complex hardware can be used instead. An RTOS has limited functionality in order to guarantee high predictability; a General-Purpose Operating System (GPOS) has a vast number of functionalities but low predictability. The literature study therefore looks into approaches to improve the real-time predictability of Linux, and finds an approach called Xenomai Cobalt to be the optimal solution given the target use case and project resources. The Xenomai Cobalt approach was evaluated on a Raspberry Pi (RPi) 3 using its General-Purpose Input/Output (GPIO) pins and a latency test. An application was written using Xenomai's Application Programming Interface (API). The application used the GPIO pins to read from a function generator and to write to an oscilloscope. The measurements from the oscilloscope were then compared to the measurements made by the application. The results showed the measured differences between the RPi 3 and the oscilloscope with the system at idle: reading varied by 66.20 µs and writing by 56.20 µs. The latency test was executed under a stress test, and the worst measured latency was 82 µs. The resulting measured differences were too high for the project requirements. However, the majority of the measurements were much smaller than the worst cases, at 23.52 µs for reading and 34.05 µs for writing. This means the system would serve better as a firm real-time system than as a hard real-time system.
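As a small post-processing sketch of the comparison described above (the CSV path and column names are invented for illustration, not taken from the thesis), the spread and worst case of the application-versus-oscilloscope timestamp differences could be computed as follows.

# Compare edge timestamps recorded by the application and by the
# oscilloscope for the same signal edges, in microseconds.
import csv

def latency_stats(path):
    diffs = []
    with open(path) as f:
        for row in csv.DictReader(f):
            diffs.append(abs(float(row["scope_us"]) - float(row["app_us"])))
    diffs.sort()
    return {"min": diffs[0],
            "median": diffs[len(diffs) // 2],
            "max": diffs[-1]}

if __name__ == "__main__":
    print(latency_stats("gpio_read_edges.csv"))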
