Domain Expertise–Agnostic Feature Selection for the Analysis of Breast Cancer Data

Pozzoli, Susanna January 2019
At present, high-dimensional data sets are becoming more and more common, and the problem of feature selection has become widespread owing to the curse of dimensionality. Unfortunately, feature selection is largely based on ground truth and domain expertise. Since ground truth and/or domain expertise may be unavailable, there is a growing need for unsupervised feature selection in multiple fields, such as marketing and proteomics. Unlike in the past, it is now possible for biologists to measure the amount of protein in a cancer cell. It is no wonder the data is high-dimensional: the human body is composed of thousands and thousands of proteins. Intuitively, only a handful of these proteins cause the onset of the disease. It might be desirable to cluster the cancer patients, but at the same time we want to find the proteins that produce good partitions. We propose a methodology designed to find the features able to maximize clustering performance. After dividing the proteins into different groups, we clustered the patients and then evaluated the clustering performance. We developed two pipelines: the first focuses on the data provided by the laboratory, while the second takes advantage of both external data on protein complexes and the internal data. We set the threshold of clustering performance with the help of the biologists at Karolinska Institutet who contributed to the project. In the thesis we show how to make a good selection of features without domain expertise in the case of breast cancer data. The experiment illustrates how, with the aid of feature selection, we can reach a clustering performance up to eight times better than the baseline.
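As an illustration of the kind of expertise-agnostic selection loop described above, the following is a minimal sketch in Python, assuming k-means for both the protein grouping and the patient clustering and the silhouette score as the internal quality measure; the grouping strategy, cluster counts and synthetic data are assumptions, not the thesis pipelines.

```python
# A minimal sketch of expertise-agnostic feature selection via clustering
# performance, assuming k-means and the silhouette score as the internal
# quality measure (the thesis pipelines may differ).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score


def group_features(X, n_groups, seed=0):
    """Partition features (columns) into groups by clustering their profiles."""
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed)
    labels = km.fit_predict(X.T)               # cluster proteins, not patients
    return [np.where(labels == g)[0] for g in range(n_groups)]


def best_feature_group(X, n_groups=10, n_patient_clusters=2, seed=0):
    """Return the feature group whose patient clustering scores highest."""
    best_score, best_group = -1.0, None
    for idx in group_features(X, n_groups, seed):
        if len(idx) < 2:                        # skip degenerate groups
            continue
        km = KMeans(n_clusters=n_patient_clusters, n_init=10, random_state=seed)
        labels = km.fit_predict(X[:, idx])      # cluster patients on this subset
        score = silhouette_score(X[:, idx], labels)
        if score > best_score:
            best_score, best_group = score, idx
    return best_group, best_score


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 500))              # 60 patients, 500 proteins (synthetic)
    X[:30, :5] += 3.0                           # a handful of informative proteins
    group, score = best_feature_group(X)
    print(f"selected {len(group)} proteins, silhouette = {score:.2f}")
```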

Investigating differences in response time and error rate between a monolithic and a microservice based architecture

Johansson, Gustav January 2019
With great advancements in cloud computing, the microservice architecture has become a promising architectural style for enterprise software. It has been proposed to cope with problems of the traditional monolithic architecture, which include slow release cycles, limited scalability and low developer productivity. This thesis therefore investigates the affordances and challenges of adopting microservices, as well as the difference in performance compared to the monolithic approach, at one of Sweden's largest banks, SEB (Skandinaviska Enskilda Banken). The investigation consisted of a literature study of research papers and official documentation on microservices. Moreover, two applications were developed and deployed using two different system architectures: a monolithic architecture and a microservice architecture. Performance tests were executed on both systems to gather quantitative data for analysis. The two metrics investigated were response time and error rate. The results indicate that the microservice architecture has a significantly higher error rate and a slower response time than the monolithic approach, further strengthening the results of Ueda et al. [47] and Villamizar et al. [48]. The findings are discussed with regard to the challenges and complexity involved in implementing distributed systems. From this study, it becomes clear that with a microservice architecture the complexity shifts from inside the application out towards the infrastructure. Microservices should therefore not be seen as a silver bullet; rather, the choice of architecture is highly dependent on the scope of the project and the size of the organization.
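For illustration, the sketch below shows a load-test harness of the kind that could collect the two metrics studied (response time and error rate); the endpoints, request counts and concurrency level are placeholders rather than the actual test setup used at SEB.

```python
# A minimal sketch of a load-test harness recording response time and error
# rate. The endpoint URLs, request count and concurrency level are placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests


def probe(url, timeout=5.0):
    """Issue one request and return (elapsed_seconds, is_error)."""
    start = time.perf_counter()
    try:
        resp = requests.get(url, timeout=timeout)
        return time.perf_counter() - start, resp.status_code >= 500
    except requests.RequestException:
        return time.perf_counter() - start, True


def load_test(url, n_requests=500, concurrency=20):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(probe, [url] * n_requests))
    times = [t for t, _ in results]
    errors = sum(1 for _, err in results if err)
    return {
        "median_response_s": statistics.median(times),
        "p95_response_s": sorted(times)[int(0.95 * len(times)) - 1],
        "error_rate": errors / n_requests,
    }


if __name__ == "__main__":
    # Hypothetical local deployments of the two applications.
    for name, url in [("monolith", "http://localhost:8080/api/accounts"),
                      ("microservices", "http://localhost:8081/api/accounts")]:
        print(name, load_test(url))
```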

Deep Learning Optimization and Acceleration

Jiang, Beilei 08 1900
The novelty of this dissertation is the optimization and acceleration of deep neural networks, aimed at real-time prediction with minimal energy consumption. It consists of cross-layer optimization, output-directed dynamic quantization, and opportunistic near-data computation for deep neural network acceleration. On two datasets (CIFAR-10 and CIFAR-100), the proposed optimization and acceleration frameworks are tested using a variety of convolutional neural networks (e.g., LeNet-5, VGG-16, GoogLeNet, DenseNet, ResNet). Experimental results are promising when compared to other state-of-the-art deep neural network acceleration efforts in the literature.
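As a loosely related illustration of trading numerical precision for speed and energy, the sketch below applies standard post-training dynamic quantization in PyTorch to a toy network; it is not the dissertation's output-directed dynamic quantization or near-data computation scheme, and the toy model is not one of the tested CNNs.

```python
# A sketch of standard post-training dynamic quantization in PyTorch, shown only
# to illustrate the general idea of trading precision for speed and energy; it
# is NOT the dissertation's output-directed dynamic quantization scheme.
import torch
import torch.nn as nn


class SmallNet(nn.Module):
    """A toy CIFAR-10-sized classifier (placeholder, not one of the tested CNNs)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


model = SmallNet().eval()

# Dynamic quantization converts the Linear layers' weights to int8 and
# quantizes activations on the fly; convolutions would need static quantization.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 3, 32, 32)        # one CIFAR-sized input
with torch.no_grad():
    print(quantized(x).shape)        # torch.Size([1, 10])
```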

Work-family Conflict And Performance Evaluations: Who Gets A Break?

Hickson, Kara 01 January 2008
Forty percent of employed parents report that they experience work-family conflict (Galinsky, Bond, & Friedman, 1993). Work-family conflict (WFC) exists when role pressures from the work and family domains are mutually incompatible. WFC is associated with decreases in family, job, and life satisfaction and physical health; intention to quit one's job; and increases in workplace absenteeism. Women may be more impacted by WFC than men, as women report completing 65-80% of the child care (Sayer, 2001) and spend 80 hours per week fulfilling work and home responsibilities (Cowan, 1983). Research suggests that WFC can be reduced with social support, such as co-workers providing assistance when family interferes with work (Carlson & Perrewe, 1999). It is unclear whether parents 'get a break' or are penalized by co-workers. The purpose of the present study was to examine co-workers' reactions to individuals who experience WFC. Based on sex role theory and attribution theory, it was predicted that women, people who experience family interference with work, and those who have more control over the work interference would be helped less and evaluated more poorly on a team task than men, people who experience non-family related work interference, and those who have less control over the work interference. A laboratory experiment was conducted in which participants signed up for a team-based study. The teammate was a confederate who was late for the study. Teammate control over the tardiness (unexpected physician's visit versus forgotten physician's appointment), type of work conflict (self- versus family-related), and gender of the teammate were manipulated. After learning about the reasons for the tardiness of their teammate, the 218 participants (63% female; 59% Caucasian) decided whether to help the late teammate by completing a word sort task for them or letting the late teammate make up the work after the experiment. When the teammate arrived, the participants completed a team task and then evaluated the task performance of their teammate. None of the hypotheses were confirmed in this study. However, exploratory analyses showed that people who had more control over the tardiness were rated lower than people who had less control over the tardiness. Contrary to expectations, exploratory analyses also showed that men rated women who were late to the study for a family-related reason higher than women who were late due to a self-related reason. These findings suggest that male co-workers may give women a break when they experience family interference with work. Implications for future research and practice are discussed.

Enabling Peer-to-Peer Swarming for Multi-Commodity Dissemination

Menasche, Daniel Sadoc 13 May 2011
Peer-to-peer swarming, as used by BitTorrent, is one of the de facto solutions for content dissemination in today’s Internet. By leveraging resources provided by users, peer-to-peer swarming is a simple, scalable and efficient mechanism for content distribution. Although peer-to-peer swarming has been widely studied for a decade, prior work has focused on the dissemination of one commodity (a single file). This thesis focuses on the multi-commodity case. We have discovered through measurements that a vast number of publishers currently disseminate multiple files in a single swarm (bundle). The first contribution of this thesis is a model for content availability. We use the model to show that, when publishers are intermittent, bundling K files increases content availability exponentially as a function of K. When there is a stable publisher, we consider content availability among peers (excluding the publisher). Our second contribution is an estimate of the dependency of peers on the stable publisher, which is useful for provisioning purposes as well as for deciding how to bundle. To this end, we propose a new metric, swarm self-sustainability, and present a model that yields swarm self-sustainability as a function of the file size, popularity and service capacity of peers. Then, we investigate reciprocity and the use of barter among peers. As our third contribution, we prove that the loss of efficiency due to the download of unrequested content to enforce direct reciprocity, as opposed to indirect reciprocity, is at most two in a class of networks without relays. Finally, we study algorithmic and economic problems faced by enterprises that leverage swarming systems and control prices and bundling strategies. As our fourth contribution, we present two formulations of the optimal bundling problem, and prove that one is NP-hard whereas the other is solvable by a greedy strategy. From an economic standpoint, we present conditions for the existence and uniqueness of an equilibrium between publishers and peers.
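The exponential effect of bundling can be illustrated with a toy calculation under assumptions that are not taken from the thesis: per-file requests arrive as a Poisson process, each peer lingers and seeds for a fixed time after downloading, and content is unavailable when no lingering peer holds it.

```python
# A toy calculation illustrating why bundling can improve availability
# exponentially in K. Assumptions (not from the thesis): requests for each file
# arrive as a Poisson process with rate lam, each peer lingers for time T after
# completing its download, and content is unavailable when no lingering peer
# holds it. By M/G/infinity reasoning the number of lingering peers is Poisson
# with mean lam*T, so unavailability is exp(-lam*T) per file and exp(-K*lam*T)
# for a K-file bundle, since the bundle pools the request streams of K files.
import math

lam = 0.5   # per-file request rate (requests per hour, illustrative)
T = 2.0     # time a peer lingers and seeds after its download (hours)

for K in (1, 2, 4, 8):
    p_unavail_single = math.exp(-lam * T)        # separate swarm per file
    p_unavail_bundle = math.exp(-K * lam * T)    # one swarm serving K files
    print(f"K={K}: per-file unavailability {p_unavail_single:.3f}, "
          f"bundle unavailability {p_unavail_bundle:.5f}")
```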

Performance Evaluation of Various QUIC Implementations: Performance and Sustainability of QUIC Implementations on the Cloud

Sitepu, Feter Akira Vedaalana January 2022
QUIC is a new secure multiplexed transport protocol built on top of UDP. This general-purpose transport protocol aims to provide the lowest connection latency possible and to solve the shortcomings of TCP and UDP and current problems of the Internet. Furthermore, it allows further development of the transport protocol without upgrading the network infrastructure. In May 2021, QUIC was finally standardized by the IETF, allowing full development and release and opening the path for new research, since older studies were based on pre-standard versions of the protocol. While there are many different QUIC implementations, this thesis selected two of them, conducted a performance evaluation in a cloud environment, and compared the two while also taking the sustainability aspect into account. As a result, this experiment shows which of the selected implementations is more environmentally friendly while also providing good performance. / 2022 GENIAL Summer School
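A minimal sketch of how such a comparison could be made from the command line follows, assuming a curl build compiled with HTTP/3 support and a server that speaks both HTTP/3 (QUIC) and HTTP/2; the URL and repetition count are placeholders, not the thesis configuration.

```python
# A minimal sketch comparing transfer latency over HTTP/3 (QUIC) and HTTP/2
# using curl's timing output. It assumes a curl build with HTTP/3 support and a
# server that speaks both protocols; URL and repetition count are placeholders.
import statistics
import subprocess

URL = "https://example.com/"   # placeholder endpoint
RUNS = 20


def time_total(proto_flag):
    """Run curl once and return curl's total transfer time in seconds."""
    out = subprocess.run(
        ["curl", proto_flag, "-s", "-o", "/dev/null",
         "-w", "%{time_total}", URL],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout)


for name, flag in [("HTTP/3 (QUIC)", "--http3"), ("HTTP/2", "--http2")]:
    samples = [time_total(flag) for _ in range(RUNS)]
    print(f"{name}: median {statistics.median(samples) * 1000:.1f} ms")
```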

Call admission control using cell breathing concept for wideband CDMA

Mishra, Jyoti L., Dahal, Keshav P., Hossain, M. Alamgir January 2006
This paper presents a Call Admission Control (CAC) algorithm based on fuzzy logic that maintains quality of service using the cell breathing concept. When a new call is accepted by a cell, its current users are generally affected due to cell breathing. The proposed CAC algorithm accepts a new call only if the current users in the cell are not jeopardized. Performance evaluation is carried out for single-cell and multicell scenarios. In the multicell scenario, dynamic assignment of users to neighbouring cells, so-called handoff, is considered to achieve a lower blocking probability. Both handoff and new call requests are considered, with handoff given preference through a reserved-channel scheme. CAC for different types of services is shown, depending on the bandwidth requirements of voice, data and video. Distance, arrival rate, bandwidth and the non-orthogonality factor of the signal are considered in making the call acceptance decision. The paper demonstrates that fuzzy logic combined with the cell breathing concept can be used to develop a CAC algorithm that achieves better performance.
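To make the idea concrete, the sketch below shows a toy fuzzy admission decision driven by load, distance and requested bandwidth; the membership functions, rule set and threshold are illustrative assumptions, not the authors' design.

```python
# A minimal sketch of a fuzzy-logic admission decision of the kind described in
# the paper. The membership functions, rule set and threshold are illustrative
# assumptions, not the authors' actual design.
def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def admit_call(load, distance, requested_bw):
    """Return True if a new call should be admitted.

    load         : current cell load in [0, 1]
    distance     : normalized caller distance from the base station in [0, 1]
    requested_bw : normalized bandwidth demand of the service in [0, 1]
    """
    # Fuzzify the inputs.
    load_low, load_high = tri(load, -0.5, 0.0, 0.6), tri(load, 0.4, 1.0, 1.5)
    dist_near, dist_far = tri(distance, -0.5, 0.0, 0.7), tri(distance, 0.3, 1.0, 1.5)
    bw_small, bw_large = tri(requested_bw, -0.5, 0.0, 0.6), tri(requested_bw, 0.4, 1.0, 1.5)

    # Rules: min for AND, each rule votes for "accept" or "reject".
    accept = max(min(load_low, bw_small),       # light load, small demand
                 min(load_low, dist_near))      # light load, caller near the centre
    reject = max(min(load_high, bw_large),      # heavy load, large demand
                 min(load_high, dist_far))      # heavy load, cell would breathe in

    # Crisp decision: accept only if current users are not jeopardized.
    return accept >= reject and reject < 0.7


if __name__ == "__main__":
    print(admit_call(load=0.3, distance=0.2, requested_bw=0.1))  # likely True
    print(admit_call(load=0.9, distance=0.8, requested_bw=0.7))  # likely False
```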

Localized Quality of Service Routing Algorithms for Communication Networks. The Development and Performance Evaluation of Some New Localized Approaches to Providing Quality of Service Routing in Flat and Hierarchical Topologies for Computer Networks.

Alzahrani, Ahmed S. January 2009
Quality of Service (QoS) routing is considered one of the major components of the QoS framework in communication networks. The concept of QoS routing has emerged from the fact that routers direct traffic from source to destination depending on data types, network constraints and requirements, in order to achieve network performance efficiency. It has been introduced to administer, monitor and improve the performance of computer networks. Many QoS routing algorithms are used to maximize network performance by balancing traffic distributed over multiple paths. Its major components include bandwidth, delay, jitter, cost and loss probability, used to measure end users' requirements, optimize network resource usage and balance traffic load. The majority of existing QoS algorithms require the maintenance of global network state information and use it to make routing decisions. The global QoS network state needs to be exchanged periodically among routers, since the efficiency of a routing algorithm depends on the accuracy of link-state information. However, most QoS routing algorithms suffer from scalability problems because of the high communication overhead and the high computation effort associated with maintaining and distributing the global state information to each node in the network.
The goal of this thesis is to contribute to enhancing the scalability of QoS routing algorithms. Motivated by this, the thesis focuses on localized QoS routing, which is proposed to achieve QoS guarantees and overcome the problems of using global network state information, such as the high communication overhead caused by frequent state updates, the inaccuracy of link-state information for large update intervals, and route oscillation due to stale views of the network state. Using such an approach, the source node makes its own routing decisions based on information that is local to each node in the path. Localized QoS routing does not need the global network state to be exchanged among network nodes, because it infers the network state, and it thus avoids all the problems associated with global state, such as high communication and processing overheads and oscillating behaviour. In localized QoS routing each source node is required to first determine a set of candidate paths to each possible destination. In this thesis we have developed localized QoS routing algorithms that select a path based on its quality to satisfy the connection requirements. In the first part of the thesis a localized routing algorithm has been developed that relies on the average residual bandwidth that each path can support to make routing decisions. In the second part of the thesis, we have developed a localized delay-based QoS routing (DBR) algorithm which relies on a delay constraint that each path satisfies to make routing decisions. We also modify credit-based routing (CBR) so that it uses delay instead of bandwidth. Finally, we have developed a localized QoS routing algorithm for routing in two levels of a hierarchical network, which relies on residual bandwidth to make routing decisions in a hierarchical network like the Internet. We have compared the performance of the proposed localized routing algorithms with other localized and global QoS routing algorithms under different ranges of workloads, system parameters and network topologies.
Simulation results have indicated that the proposed algorithms indeed outperform algorithms based on the schemes that currently operate on the Internet, even for a small link-state update interval. The proposed algorithms also reduce the routing overhead significantly and utilize network resources efficiently.
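A minimal sketch of the localized, residual-bandwidth-based selection described in the first part is given below; the source node keeps only a locally maintained estimate per candidate path, with the path names, smoothing factor and admission rule chosen purely for illustration.

```python
# A minimal sketch of localized, residual-bandwidth-based path selection: the
# source keeps only locally observable statistics per candidate path (no global
# link-state exchange) and routes a flow on the candidate currently believed to
# have the most spare capacity. Path names, the smoothing factor and the
# admission rule are illustrative assumptions.
class LocalizedBandwidthRouter:
    def __init__(self, candidate_paths, alpha=0.2):
        self.alpha = alpha                          # smoothing for the local estimate
        # Exponentially weighted average of residual bandwidth seen on each path.
        self.avg_residual = {p: float("inf") for p in candidate_paths}

    def select_path(self, demanded_bw):
        """Pick the candidate path with the largest estimated residual bandwidth."""
        path = max(self.avg_residual, key=self.avg_residual.get)
        if self.avg_residual[path] < demanded_bw:
            return None                             # block the flow locally
        return path

    def update(self, path, observed_residual_bw):
        """Fold in residual bandwidth observed when a flow was set up (or probed)."""
        old = self.avg_residual[path]
        if old == float("inf"):
            self.avg_residual[path] = observed_residual_bw
        else:
            self.avg_residual[path] = (1 - self.alpha) * old + self.alpha * observed_residual_bw


if __name__ == "__main__":
    router = LocalizedBandwidthRouter(["p1", "p2", "p3"])
    router.update("p1", 40.0)
    router.update("p2", 10.0)
    router.update("p3", 25.0)
    print(router.select_path(demanded_bw=20.0))     # expected: "p1"
```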

Localised Routing Algorithms in Communication Networks with Quality of Service Constraints. Performance Evaluation and Enhancement of New Localised Routing Approaches to Provide Quality of Service for Computer and Communication Networks.

Mohammad, Abdulbaset H. T. January 2010
Quality of Service (QoS) is a concept that is gaining increasing attention in the Internet industry. Best-effort delivery is no longer acceptable in situations that require high bandwidth provisioning, low loss and the streaming of multimedia applications. New emerging multimedia applications require levels of quality of service beyond those supported by best-effort networks. Quality of service routing is an essential part of any QoS architecture in communication networks. QoS routing aims to select, among the many possible choices, a path that has sufficient resources to accommodate the QoS requirements. QoS routing can significantly improve network performance due to its awareness of the network's QoS state. Most QoS routing algorithms require maintenance of global network state information to make routing decisions. Global state information needs to be periodically exchanged among routers, since the efficiency of a routing algorithm depends on the accuracy of link-state information. However, most QoS routing algorithms suffer from scalability problems due to the high communication overhead and the high computation effort associated with maintaining accurate link-state information and distributing global state information to each node in the network. The ultimate goal of this thesis is to contribute towards enhancing the scalability of QoS routing algorithms. Towards this goal, the thesis focuses on localized QoS routing algorithms, proposed to overcome the problems of using global network state information. Using such an approach, the source node makes routing decisions based on the local state information for each node in the path. Localized QoS routing algorithms avoid the problems associated with the global network state, such as high communication and processing overheads. In localized QoS routing algorithms each source node maintains a predetermined set of candidate paths for each destination and avoids the problems associated with the maintenance of a global network state by using locally collected flow statistics and flow blocking probabilities. / Libya's higher education
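For comparison with the previous sketch, the following is a toy credit-based variant in the spirit of the CBR scheme mentioned above, where each candidate path's credit counter is maintained solely from locally observed setup successes and blocking events; the credit bounds, penalties and toy acceptance probabilities are illustrative assumptions.

```python
# A minimal sketch of credit-based localized path selection: each candidate
# path keeps a credit counter maintained purely from locally observed setup
# successes and blocking events. Credit bounds and penalties are assumptions.
import random


class CreditBasedRouter:
    def __init__(self, candidate_paths, max_credit=100):
        self.max_credit = max_credit
        self.credits = {p: max_credit // 2 for p in candidate_paths}

    def select_path(self):
        """Prefer paths with more credits; break ties at random."""
        best = max(self.credits.values())
        return random.choice([p for p, c in self.credits.items() if c == best])

    def report(self, path, accepted):
        """Update credits from the locally observed outcome of a flow setup."""
        if accepted:
            self.credits[path] = min(self.max_credit, self.credits[path] + 1)
        else:
            # Blocking is penalized more heavily so the source backs off quickly.
            self.credits[path] = max(0, self.credits[path] - 5)


if __name__ == "__main__":
    router = CreditBasedRouter(["p1", "p2"])
    for _ in range(20):
        path = router.select_path()
        accepted = random.random() < (0.9 if path == "p1" else 0.4)  # toy network
        router.report(path, accepted)
    print(router.credits)   # p1 should accumulate more credits than p2
```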

Performance Evaluation of the McMaster Incident Detection Algorithm

Lyall, Bradley Benjamin 04 1900
The McMaster incident detection algorithm is being tested on-line within the Burlington freeway traffic management system (FTMS) as an alternative to the existing California-type algorithm currently in place. This paper represents the most recent and comprehensive evaluation of the McMaster algorithm's performance to date. In the past, the algorithm has been tested using single-lane detectors for the northbound lanes only. This evaluation uses data from lanes 1 and 2 for each of the 13 northbound and 13 southbound detector stations. The data was collected during a 60-day period beginning on November 15, 1990 and ending January 13, 1991. Detection rate, mean time-lag to detection and false alarm rate are used to evaluate the performance of the algorithm. Factors that influenced the performance of the algorithm, such as winter precipitation, are also examined. To improve the algorithm's detection rate and lower its false alarm rate, it is recommended that the persistence check used to declare an incident be increased by 30 seconds, from 2 to 3 periods. / Thesis / Candidate in Philosophy
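The recommended persistence check can be illustrated with a short sketch; the raw per-period flag below stands in for the McMaster flow-occupancy test, which is not reproduced here.

```python
# A minimal sketch of the persistence check discussed above: a raw per-period
# incident flag (a placeholder for the actual McMaster flow-occupancy template
# logic) must persist for N consecutive 30-second periods before an incident is
# declared. Raising N from 2 to 3, as recommended, trades a slightly longer
# time-lag to detection for a lower false alarm rate.
def declare_incidents(raw_flags, persistence=3):
    """Return per-period incident declarations after applying the persistence check."""
    declared, run = [], 0
    for flag in raw_flags:
        run = run + 1 if flag else 0
        declared.append(run >= persistence)
    return declared


if __name__ == "__main__":
    # One flag every 30 s; a two-period blip followed by a sustained incident.
    raw = [False, True, True, False, True, True, True, True]
    print(declare_incidents(raw, persistence=2))  # the blip triggers a false alarm
    print(declare_incidents(raw, persistence=3))  # only the sustained run is declared
```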
