21

Towards Secure Collaborative AI Service Chains

Ahmadi Mehri, Vida January 2019 (has links)
At present, Artificial Intelligence (AI) systems have been adopted in many different domains, such as healthcare, robotics, automotive, telecommunication systems, security, and finance, to integrate intelligence into services and applications. Intelligent personal assistants such as Siri and Alexa are examples of AI systems making an impact on our daily lives. Since many AI systems are data-driven, their development requires large volumes of data for training and validation, advanced algorithms, computing power, and storage. Collaboration in the AI development process (the AI engineering process) reduces the cost and time of bringing AI applications to market. However, collaboration introduces concerns about privacy and the piracy of intellectual property, which can be caused by the actors who collaborate in the engineering process. This work investigates the non-functional requirements, such as privacy and security, for enabling collaboration in AI service chains. It proposes an architectural design approach for collaborative AI engineering and explores the concept of the pipeline (service chain) for chaining AI functions. To enable controlled collaboration between AI artefacts in a pipeline, this work uses virtualisation technology to define and implement Virtual Premises (VPs), which act as protection wrappers for AI pipelines. A VP is a virtual policy enforcement point for a pipeline and requires access permission and authenticity for each element in a pipeline before the pipeline can be used. Furthermore, the proposed architecture is evaluated with a use-case approach that enables quick detection of design flaws during the initial stage of implementation. To evaluate the security level and compliance with security requirements, threat modelling was used to identify potential threats and vulnerabilities of the system and analyse their possible effects. The output of the threat modelling was used to define countermeasures against threats related to unauthorised access and execution of AI artefacts.
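As an illustration of the VP concept described in this abstract (not the thesis's actual implementation), the following C++ sketch shows a policy enforcement point that checks access permission and authenticity for each pipeline element before the pipeline may run; all type and function names here are hypothetical placeholders.

```cpp
// Hypothetical sketch of a VP acting as a policy enforcement point.
// The policy and signature checks are illustrative stubs only.
#include <iostream>
#include <string>
#include <vector>

struct Artefact {
    std::string id;         // pipeline element (e.g., an AI model or dataset)
    std::string signature;  // authenticity proof supplied by its owner
};

// Stand-ins for a real policy store and a real signature verifier.
bool has_permission(const std::string& actor, const Artefact& a) {
    return !actor.empty() && !a.id.empty();  // placeholder policy
}
bool is_authentic(const Artefact& a) {
    return !a.signature.empty();             // placeholder verification
}

// The VP gates the whole pipeline: every element must pass both checks.
bool vp_authorise(const std::string& actor,
                  const std::vector<Artefact>& pipeline) {
    for (const auto& a : pipeline) {
        if (!has_permission(actor, a) || !is_authentic(a)) {
            std::cerr << "VP: rejecting pipeline at element " << a.id << "\n";
            return false;
        }
    }
    return true;  // only now may the chained AI functions execute
}

int main() {
    std::vector<Artefact> pipeline{{"preprocess", "sig-a"},
                                   {"classifier", "sig-b"}};
    std::cout << (vp_authorise("alice", pipeline) ? "granted" : "denied")
              << "\n";
}
```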
22

Investigating and Deploying an AI model on Raspberry Pi IoT platform using FIWARE and Docker in the context of the Bonseyes AI Ecosystem

Mamidi, Sai Prakash, Ummadisetty, Yogitha Manasa January 2019 (has links)
Many high-end, sophisticated devices are being replaced by small single-board IoT devices. It is estimated that by the year 2025 there will be about 75 billion connected IoT devices worldwide. This project supports that trend: it focuses on how to deploy a complex AI face-recognition model on a simple, low-power, single-board IoT device, the Raspberry Pi 3, and on simplifying the whole deployment process by dockerizing it. By simply pulling the respective Docker image and running it on the end device (Raspberry Pi 3), we can perform face recognition. Finally, the obtained results are sent to the cloud (FIWARE) using the NGSI API. Instead of sending the whole video feed to the cloud platform and performing the computation there, we perform the computation at the edge and then send only the results to the cloud, making them accessible to requesting users.
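As a hedged sketch of the reporting step: the snippet below creates a context entity on a FIWARE Orion Context Broker through the NGSI-v2 REST API using libcurl. The broker URL, entity id/type, and attribute names are assumptions chosen for illustration, not values from the thesis.

```cpp
// Sketch: publish a face-recognition result as an NGSI-v2 entity.
// Assumes an Orion Context Broker at localhost:1026; the entity and
// attribute names are hypothetical. Build with: g++ file.cpp -lcurl
#include <curl/curl.h>
#include <string>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // NGSI-v2 entity body: id, type, and one attribute with the result.
    const std::string body = R"({
      "id": "urn:ngsi-ld:Camera:pi3-01",
      "type": "FaceRecognitionResult",
      "personDetected": {"value": "alice", "type": "Text"}
    })";

    curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:1026/v2/entities");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

    CURLcode rc = curl_easy_perform(curl);  // POST creates the entity

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```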
23

Adaptive Wireless Transmission System

Ericsson, Elias January 2019 (has links)
No description available.
24

Performance Evaluation of Multicast Behavior in Congested Networks

Jyothula, Urmila January 2019 (has links)
Compuverde’s software-defined storage product uses multicast for the communication between servers in a cluster. The product uses IP UDP multicast for sending status messages between the servers that form the storage cluster. The storage cluster’s capacity and performance scale linearly with the number of servers in the cluster. The problem is that the multicast traffic also increases with the number of nodes, since all nodes send to all other nodes in the cluster. In this document, we present an evaluation of IP multicast behavior in a network congested with traffic similar to that produced by Compuverde’s product. IP multicast is a method of sending Internet Protocol (IP) datagrams to a group of interested receivers in a single transmission. It provides efficient, timely, and global many-to-many distribution of data, and as such may become the broadcast medium of choice in the future [1]. The main benefit of IP multicast is that it reduces bandwidth consumption when data from a sender must reach multiple receivers. We are interested in studying the effects on the network when we send multicast packets at a rate close to the operational limit of the switch. To study this behavior at a larger scale, Compuverde will provide a cluster of 48 servers, all connected to the same switch. In addition, we will compare the behavior of IPv4 multicast traffic to that of IPv6.
Aims and Objectives: The aim of this thesis is mainly to focus on IP multicast and to compare IPv4 multicast performance results to those of IPv6 multicast. In addition, a C++ tool for generating multicast traffic will be developed on Linux. The objectives are: a detailed study of IP multicast (IPv4, IPv6); a detailed study of the design and efficient implementation of a multicast traffic generation tool; a detailed study of the switch used in the project (additional switches may be provided by BTH); and a detailed study of the pattern of dropped packets when the traffic rate approaches the operational limit, along with related impairments on QoS metrics (e.g., CPU utilization).
Methods: The method is to develop a tool that generates multicast load towards servers in a cluster. The data sent as multicast packets shall contain information that makes it possible to detect packet loss on the receiving servers if the network becomes congested. The first version of the tool shall use existing socket classes based on the IPv4 protocol and shall be written in C++. The tool shall be able to run in two modes at the same time: client mode and server mode. The server part of the tool shall subscribe to a predefined multicast address and receive incoming multicast packets. The client part of the tool shall send data packets to the same predefined multicast address at a configurable rate that increases over time. The data in each packet shall be constructed so that the receiver (server) can detect whether a packet was lost in transmission. The load should start small, with a small number of servers in the cluster, and then scale up the number of servers in steps until a maximum of 48 servers is reached. The rate at which the multicast packets are sent should also be increased until the switch becomes overloaded and starts to drop packets. The pattern of how packets are dropped should be observed: for example, whether large chunks of packets are dropped or every second packet is dropped. The second version of the tool shall support IPv6 multicast. The second round of tests should be performed in a way that makes the results comparable to those from the IPv4 tests, so that conclusions can be drawn about whether one protocol performs better or is more reliable.
Result: The maximum number of IPv4 packets the switch can handle is 140 packets per second, while the maximum number of IPv6 packets it can handle is 6 packets per second. With the switch and 95 nodes, CPU utilization is higher when multicasting IPv4 packets than when multicasting IPv6 packets.
Conclusion: IPv4 is a more efficient protocol than IPv6 when sending packets at a very high data rate. CPU utilization is higher when sending packets with the IPv6 protocol than with the IPv4 protocol.
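A minimal sketch of the kind of loss-detecting tool the method describes is shown below; the group address, port, and pacing are hypothetical choices, error handling is omitted for brevity, and this is not the thesis's actual tool.

```cpp
// Illustrative IPv4 multicast sender/receiver with sequence numbers,
// so receivers can detect gaps (lost packets) under congestion.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

static const char* kGroup = "239.0.0.1";  // hypothetical multicast group
static const uint16_t kPort = 5000;       // hypothetical port

// Client mode: send packets carrying a monotonically increasing
// sequence number.
void run_client() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(kPort);
    inet_pton(AF_INET, kGroup, &addr.sin_addr);
    for (uint64_t seq = 0;; ++seq) {
        sendto(sock, &seq, sizeof(seq), 0,
               reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        usleep(1000);  // fixed pacing; the real tool ramps the rate up
    }
}

// Server mode: join the group and flag any jump in sequence numbers.
void run_server() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int reuse = 1;
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(kPort);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    ip_mreq mreq{};
    inet_pton(AF_INET, kGroup, &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
    uint64_t expected = 0;
    for (;;) {
        uint64_t seq = 0;
        recv(sock, &seq, sizeof(seq), 0);
        if (seq != expected)  // a gap means packets were dropped
            std::fprintf(stderr, "lost %llu packet(s) before #%llu\n",
                         (unsigned long long)(seq - expected),
                         (unsigned long long)seq);
        expected = seq + 1;
    }
}

int main(int argc, char** argv) {
    // Run as "./tool server" on receivers, "./tool client" on the sender.
    if (argc > 1 && std::strcmp(argv[1], "server") == 0) run_server();
    else run_client();
}
```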
25

Optimization of 5G New Radio for Fixed Wireless Access

Palm, Jonathan January 2019 (has links)
With the advent of new 5G networks, interest in connecting households to the Internet via mobile networks has increased. One way to connect users is with completely stationary antennas. This use case is called Fixed Wireless Access (FWA) and is seen as a promising, cost-efficient means of expanding Internet connectivity. Stationary users connected at high frequencies, such as 28 GHz, lead to a special use case and environment for 5G New Radio (NR). This thesis investigates the characteristics of these FWA deployments and the control signaling on the physical layer of NR; the overhead and feasibility of each signal is considered. An FWA deployment in the 28 GHz band with 64 users is simulated with different line-of-sight settings and receiver placements. It is concluded that a direct line of sight to the base station is vital for high user and cell throughput, and that there are significant drawbacks to placing the receiver indoors. New algorithms for Channel State Information Reference Signal (CSI-RS) transmission, for both beam management and link adaptation, are proposed and evaluated. The beam management algorithms do not display any significant performance gains over the default sweeping algorithm; closer investigation of the simulation results shows that several beams can have almost equal signal strength with the chosen antenna setup, minimizing the potential gains of quickly adapting to environmental changes. The results show clear benefits of using an aperiodic, adaptive transmission scheme for CSI-RS transmissions over a fixed-rate scheme, yielding a 7% increase in user goodput at similar levels of overhead.
26

Analysis of Alternative Massive MIMO Designs : Superimposed Pilots and Mixed-ADCs

Verenzuela, Daniel January 2018 (has links)
The development of information and communication technologies (ICT) provides the means for reaching global connectivity that can help humanity progress and prosper. This comes with high demands on data traffic and on the number of connected devices, which are growing rapidly and need to be met by technological development. Massive MIMO, where MIMO stands for multiple-input multiple-output, is envisioned as a fundamental component of next-generation wireless communications for its ability to provide high spectral efficiency (SE) and energy efficiency (EE). The key feature of this technology is the use of a large number of antennas at the base stations (BSs) to spatially multiplex several user equipments (UEs). In the development of new technologies like Massive MIMO, many design alternatives need to be evaluated and compared in order to find the best operating point, with a preferable tradeoff between high performance and low cost. In this thesis, two alternative designs for signal processing and hardware in Massive MIMO are studied and compared with the baseline operation in terms of SE, EE, and power consumption. The first design, called superimposed pilot (SP) transmission, is based on superimposing pilot and data symbols to remove the overhead of pilot transmission and reduce pilot contamination. The second design, mixed analog-to-digital converters (ADCs), aims at balancing high performance and low complexity by allowing different ADC bit resolutions across the BS antennas. The results show that the baseline operation of Massive MIMO, properly optimized, is the preferred choice. However, SP and mixed ADCs still have room for improvement, and further study is needed to ascertain the full capabilities of these alternative designs. / Minor typographic errors are corrected in the electronic version.
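For intuition, a minimal signal-model sketch of superimposed pilots (the notation is generic and chosen here for illustration, not taken from the thesis): instead of sending pilots and data in separate phases, each UE k transmits both simultaneously,

$$\mathbf{x}_k = \sqrt{\rho_p}\,\boldsymbol{\phi}_k + \sqrt{\rho_d}\,\mathbf{s}_k,$$

where $\boldsymbol{\phi}_k$ is the pilot sequence, $\mathbf{s}_k$ the data symbols, and $\rho_p$, $\rho_d$ the pilot and data powers. The dedicated pilot phase, and hence its overhead, disappears, but the data term now acts as interference to channel estimation, which is the kind of tradeoff such designs must balance.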
27

Embedded watermarking for image verification in telemedicine

Osborne, Dominic January 2005 (has links)
Wireless communication technology has provided increased opportunity for applications such as telemedicine. This work focuses on the end application of teleradiology, targeting the communication of digital diagnostic images to remote locations for diagnosis and treatment. Medical images have conventionally been large and stored without loss of redundancy. Recent research has demonstrated that acceptable levels of Joint Photographic Experts Group (JPEG) compression may be used on these image types without loss of diagnostic content, providing an opportunity for more rapid image transmission in wireless environments. One of the most pressing challenges that remains is finding techniques to verify the integrity of crucial diagnostic feature information that may be compromised by excessive use of standard compression methods. An authentication watermarking technique is presented, which extracts critical feature information from the Region of Interest (ROI) and embeds a series of robust watermarks into the Regions of Background (ROB) surrounding this location. This thesis considers only the effects of distortions due to compression standards and presents a body of work that is a step towards a future study considering compression together with channel noise introduced by the wireless environment. The following key contributions have been made in this thesis:
1. A novel technique to provide crucial feature authentication without introducing embedding distortions into these regions, by using multiple robust watermarks.
2. Improved performance over earlier methods, including superior robustness to DCT quantisation and complete JPEG image compression. Image fidelity is significantly improved, with less distortion introduced, and smaller signatures can be used to authenticate essential image information than with conventional methods, decreasing overall system complexity.
3. Optimised JPEG survival levels that allow permissible JPEG compression levels to be specified. / Thesis (Ph.D.)--Electrical and Electronic Engineering, 2005.
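As a generic illustration of the "robust watermark" primitive that such schemes build on (not the thesis's actual embedding algorithm), the sketch below uses quantisation index modulation (QIM) on a single transform coefficient; the step size and values are arbitrary.

```cpp
// Illustrative QIM bit embedding/extraction, of the kind often applied
// to DCT coefficients for watermarks that survive JPEG quantisation.
#include <cmath>
#include <cstdio>

// Embed one bit by snapping the coefficient to an even (bit 0) or
// odd (bit 1) multiple of step/2; larger steps survive coarser
// quantisation at the cost of more visible distortion.
double qim_embed(double coeff, int bit, double step) {
    double q = std::round(coeff / step - 0.5 * bit);
    return (q + 0.5 * bit) * step;
}

// Recover the bit by checking which lattice the coefficient is nearer.
int qim_extract(double coeff, double step) {
    double even = std::round(coeff / step) * step;
    double odd  = (std::round(coeff / step - 0.5) + 0.5) * step;
    return std::fabs(coeff - odd) < std::fabs(coeff - even) ? 1 : 0;
}

int main() {
    double marked = qim_embed(37.3, 1, 8.0);  // embed bit 1
    marked += 1.5;                            // simulate mild distortion
    std::printf("recovered bit: %d\n", qim_extract(marked, 8.0));
}
```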
28

Towards Affordable Provisioning Strategies for Local Mobile Services in Dense Urban Areas : A Techno-economic Study

Ahmed, Ashraf Awadelkarim Widaa January 2017 (has links)
Future mobile communication networks are expected to cope with growing local usage patterns, especially in dense urban areas, at more affordable deployment and operation expenses. Beyond leveraging small-cell architectures and advanced radio access technologies, more radio spectrum is expected to be required to achieve the desired techno-economic targets. Therefore, research activity has been directed towards discussing the benefits of, and need for, more flexible and local spectrum authorization schemes. This thesis is meant as a contribution to this ongoing discussion from a techno-economic perspective. In chapter three, the engineering value of the different flexible authorization options is evaluated from the perspective of established mobile network operators using the opportunity cost approach. The main results in chapter three indicate that the economic incentives to deploy more small cells based on flexible spectrum authorization options are subject to the potential savings in deployment and operation costs. Nonetheless, high engineering value can be anticipated when the density of small cells is equal to or larger than the density of active mobile subscribers. In chapter four, the possible local business models around different flexible authorization options are investigated from the perspective of emerging actors with limited or no licensed spectrum resources. In this context, dependent or independent local business models can be identified according to the surrounding spectrum regulations. One possible independent local business model for these emerging actors is to exploit the different flexible spectrum authorization options to provision tailored local mobile services. Other viable, dependent local business models rest on the possibility of entering into cooperation agreements to deploy and operate dedicated local mobile infrastructure on behalf of established mobile network operators. / QC 20170510
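As a sketch of the opportunity cost idea (a formulation chosen here for illustration, not the thesis's exact model), the engineering value $V$ of acquiring additional spectrum $\Delta W$ can be expressed as the network cost avoided by not having to densify instead:

$$V(\Delta W) = C_{\text{network}}(W) - C_{\text{network}}(W + \Delta W),$$

where $C_{\text{network}}(W)$ is the cost of deploying and operating a network that meets the capacity target with bandwidth $W$.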
29

Gaining influence in standard-setting processes: a discussion of underlying mechanisms in 3G mobile telephony technology development

Grundström, Christina, January 1900 (has links)
Diss. Linköping: Univ., 2004. / Accompanied by 7 papers.
30

National politics and international agreements: British strategies in regulating European telephony, 1923–39

Jeding, Carl, January 1900 (has links)
Licentiate thesis, Uppsala: Univ.
