81

Churn Analysis in a Music Streaming Service : Predicting and understanding retention

Chaliane Junior, Guilherme Dinis January 2017 (has links)
Churn analysis can be understood as the problem of predicting and understanding abandonment of a product or service. Industries ranging from entertainment to financial investment and cloud computing offer their products through digital platforms, and usage of those platforms leaves behavioural trails behind. These trails can be mined to understand users better, to improve the product or service, and to predict churn. In this thesis, we perform churn analysis on a real-life data set from a music streaming service, Spotify AB, with signals ranging from activity and financial indicators to temporal and performance indicators. We compare logistic regression, random forests, and neural networks for the task of churn prediction, and in addition a fourth approach combining random forests with neural networks is proposed and evaluated. A meta-heuristic technique is then applied to the data set to extract association rules, understandable to decision makers, that describe quantified relationships between predictors and churn. We relate these findings to patterns observed in aggregate-level data, finding probable explanations of how specific product features and user behaviours lead to churn or activation. For churn prediction, we found that all three non-linear methods performed better than logistic regression, suggesting the limitations of linear models for our use case. Our proposed enhanced random forest model performed mildly better than the conventional random forest.
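A minimal, hedged sketch of the kind of model comparison described above, using scikit-learn on synthetic placeholder data; the feature matrix, train/test split, and metric are illustrative assumptions, not the thesis's actual features or evaluation pipeline.

```python
# Hypothetical sketch: comparing a linear and two non-linear churn classifiers,
# in the spirit of the comparison described in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder feature matrix: one row per user, columns standing in for
# activity, financial, temporal, and performance signals (synthetic data here).
X = rng.normal(size=(5000, 12))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=5000) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```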
82

Utveckling av tjänster för ett mobilt medium

Tyseng, Daniel January 2004 (has links)
The mobile revolution is here to stay, and today most people have (at least) one cell phone at home, and some even a PDA (Personal Digital Assistant) as a complement for the simpler tasks at the office, e.g. sending e-mail, word processing, and browsing the Internet. Many businesses have also given their employees the opportunity to handle some work tasks in between customer visits with the help of a cell phone or PDA.

Despite all this, the so-called Internet-connected mobile revolution is often described as a big failure, perhaps because people in general did not use the new technology as much as the companies predicted. This could be because they still have not learned how it works, or simply do not feel a need for the services provided.

After a couple of failures in the development process, almost every cell phone today has support for mobile Internet. The responsibility instead lies with the content providers and cell phone operators. Users will need killer applications, e.g. services such as e-mail, chat, and timetables, services they feel they cannot be without. How is such a service created? What kinds of obstacles, beyond the purely technical ones, stand in the way of making the service available to everyone who uses a cell phone? And what responsibility do the operators have?

Moving from the Internet-based platform to the mobile platform can involve some difficult tasks. Even though some technical issues occurred during the project, the main challenge is not the technical one. In fact, the technical issues should not be a problem for companies already working with Internet technology.
83

Reverse Engineering of Legacy Real-Time Systems : An Automated Approach Based on Execution-Time Recording

Huselius, Joel January 2007 (has links)
Many real-time systems have significant value in terms of legacy, since large efforts have been spent over many years to ensure their proper functionality. Examples can be found in, e.g., the telecom and automation industries. Maintenance consumes the major part of the budget for these systems. As each system is part of a dynamically changing larger whole, maintenance is required to modify the system to adapt to these changes. However, due to system complexity, engineers cannot be assumed to understand the system in every aspect, making the full range of effects of modifications on the system difficult to predict. Effect prediction would be useful, for instance in early discovery of unsuitable modifications. Accurate models would be useful for such prediction, but are generally non-existent.

With the introduction of a method for automated modeling, this thesis applies an industrial perspective to the problem of obtaining models of legacy real-time systems. The method generates a model of the system as it behaved during recorded executions. The recordings cover system-level events such as context switches and communication, and may optionally cover data manipulations at task level, which allows modeling of causal relations. As means of abstraction, the models can contain probabilistic selections and execution-time requirements. The method also includes automatic validation of the generated model, in which the model is compared to the system behavior. Our method has been implemented and evaluated in both an industrial case study and a controlled experiment. For the controlled experiment, we have developed a framework for automatic evaluation of (automated) modeling methods.

Using the models generated with our method, engineers can prototype designs of modifications, which allows for early rejection of unfeasible designs. The earlier such rejection is performed, the more time and resources are freed for other activities.
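As a rough illustration of the modeling step described above, the sketch below derives transition probabilities and observed execution times from a recorded trace of context-switch events; the trace format and the model representation are assumptions made for illustration, not the method implemented in the thesis.

```python
# Hypothetical sketch: deriving a probabilistic model with execution-time
# observations from a recorded trace of context-switch events.
from collections import defaultdict

# Assumed trace format: (timestamp_us, task_now_running)
trace = [(0, "A"), (120, "B"), (180, "A"), (300, "C"), (450, "A"), (600, "B")]

transitions = defaultdict(lambda: defaultdict(int))   # task -> next task -> count
exec_times = defaultdict(list)                        # task -> observed running times

for (t0, task), (t1, nxt) in zip(trace, trace[1:]):
    exec_times[task].append(t1 - t0)
    transitions[task][nxt] += 1

# Probabilistic selection: relative frequency of each successor task,
# plus observed execution-time bounds per task.
for task, succ in transitions.items():
    total = sum(succ.values())
    probs = {n: c / total for n, c in succ.items()}
    times = exec_times[task]
    print(task, "->", probs, "exec time (min/max us):", min(times), max(times))
```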
84

Flexible Scheduling for Media Processing in Resource Constrained Real-Time Systems

Isovic, Damir January 2004 (has links)
The MPEG-2 standard for video coding is predominant in consumer electronics for DVD players, digital satellite receivers, and TVs today. MPEG-2 processing puts high demands on audio/video quality, which is achieved by continuous and synchronized playout without interruptions. At the same time, there are restrictions imposed by the storage media, e.g., the limited size of a DVD disc; the communication media, e.g., the limited bandwidth of the Internet; the display devices, e.g., the processing power, memory and battery life of pocket PCs or video mobile phones; and finally the users, i.e., the human ability to perceive motion. If the available resources are not sufficient to process a full-size MPEG-2 video, then video stream adaptation must take place. However, this should be done carefully, since in high-quality devices, drops in perceived video quality are not tolerated by consumers.

We propose real-time methods for resource reservation of MPEG-2 video stream processing and introduce flexible scheduling mechanisms for video decoding. Our method is a mixed offline and online approach for scheduling of periodic, aperiodic and sporadic tasks, based on slot shifting. We use the offline part of slot shifting to eliminate all types of complex task constraints before the runtime of the system. Then, we propose an online guarantee algorithm for dealing with dynamically arriving tasks. Aperiodic and sporadic tasks are incorporated into the offline schedule by making use of the unused resources and leeways in the schedule. Sporadic tasks are guaranteed offline for the worst-case arrival patterns and scheduled online, where an online algorithm keeps track of arrivals of instances of sporadic tasks to reduce pessimism about future sporadic arrivals and to improve response times and acceptance of firm aperiodic tasks. At runtime, our mechanism ensures feasible execution of tasks with complex constraints in the presence of additional tasks or overloads.

We use the scheduling and resource reservation mechanisms above to flexibly process MPEG-2 video streams. First, we present results from a study of realistic MPEG-2 video streams to analyze the validity of common assumptions for software decoding and identify a number of misconceptions. Then, we identify constraints imposed by frame buffer handling and discuss their implications on the decoding architecture and timing. Furthermore, we propose realistic timing constraints demanded by high-quality MPEG-2 software video decoding. Based on these, we present an MPEG-2 video frame selection algorithm, focused on high video quality as perceived by the users, which fully utilizes the limited resources. Given that not all frames in a stream can be processed, it selects those which will provide the best picture quality while matching the available resources, starting only such decoding as is guaranteed to complete. As a final result, we provide a real-time method for flexible scheduling of media processing in resource-constrained systems. Results from a study based on realistic MPEG-2 video underline the effectiveness of our approach.
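The frame-selection idea can be illustrated with a small sketch: start decoding only frames that are guaranteed to complete within the remaining budget, preferring reference frames (I over P over B). The cost model and priority scheme below are simplified assumptions, not the algorithm evaluated in the thesis, and real MPEG-2 decoding also has inter-frame dependencies that this sketch ignores.

```python
# Hypothetical sketch: selecting MPEG-2 frames to decode under a limited
# processing budget, preferring reference frames and starting only
# decodings guaranteed to complete.
PRIORITY = {"I": 0, "P": 1, "B": 2}  # lower value = more important frame type

def select_frames(frames, budget):
    """frames: list of (frame_type, worst_case_decode_cost); budget: total cost allowed."""
    selected = []
    # Consider the most important frame types first, in stream order.
    for idx, (ftype, cost) in sorted(enumerate(frames), key=lambda e: (PRIORITY[e[1][0]], e[0])):
        if cost <= budget:          # start only if completion is guaranteed
            selected.append(idx)
            budget -= cost
    return sorted(selected)         # decode the chosen frames in stream order

gop = [("I", 8), ("B", 3), ("B", 3), ("P", 5), ("B", 3), ("B", 3), ("P", 5)]
print(select_frames(gop, budget=20))   # -> [0, 3, 6]: the I frame and both P frames fit
```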
85

Evaluation of OKL4 / Virtualisering med OKL4

Bylund, Mathias January 2009 (has links)
Virtualization is not a new concept in computer science. It has been used since the middle of the sixties, and software companies have now become interested in this technology. Virtualization is used on the server side to maximize capacity and reduce power consumption. This thesis focuses on virtualization in embedded systems. The technology uses a hypervisor, or virtual machine monitor, as a software layer that provides the virtual machines and isolates them from the underlying hardware. One of the most interesting aspects is that it supports several operating systems and applications running on the same hardware platform, while the hypervisor has complete control of system resources. The company Open Kernel Labs is one of the leading providers of embedded-systems virtualization software, and OKL4 is one of their products, based on the L4 family of second-generation microkernels. In this thesis, we evaluate the kernel contents, the performance, the security and the environment of OKL4. Finally, we conclude with the advantages and disadvantages of the product and the technology.
86

New Strategies for Ensuring Time and Value Correctness in Dependable Real-Time Systems

Aysan, Hüseyin January 2009 (has links)
Dependable real-time embedded systems are typically composed of a number of heterogeneous computing nodes, heterogeneous networks that connect them, and tasks with multiple criticality levels allocated to the nodes. The heterogeneous nature of the hardware results in varying vulnerability to different types of hardware failures. For example, a computing node with effective shielding shows higher resistance to transient failures caused by environmental conditions, such as radiation or temperature changes, than an unshielded node. Similarly, resistance to permanent failures can vary depending on the manufacturing procedures used. A task's vulnerability to different types of errors, which may lead to a system failure, depends on several factors, such as the hardware on which the task runs and communicates, the software architecture and the implementation quality of the software, and it varies from task to task. This variance, as well as the different criticality levels and real-time requirements of tasks, necessitates that novel fault-tolerance approaches be developed and used in order to meet the stringent dependability requirements of resource-constrained real-time systems.

In this thesis, the major contribution is four-fold. Firstly, we describe an error classification for real-time embedded systems and address error propagation aspects. The goal of this work is to perform the analysis on a given system in order to find bottlenecks in satisfying dependability requirements and to provide guidelines on the usage of appropriate error detection and fault tolerance mechanisms.

Secondly, we present a time-redundancy approach to provide a priori guarantees in fixed-priority scheduling (FPS) such that the system will be able to tolerate one value error per critical task instance, by re-execution of every critical task instance or execution of alternate tasks before deadlines, while keeping the associated costs minimized.

Our third contribution is a new approach, Voting on Time and Value (VTV), which extends the N-modular redundancy approach by explicitly considering both value and timing errors, such that a correct value is produced at a correct time, under specified assumptions. We illustrate our voting approach by instantiating it in the context of the well-known triple modular redundancy (TMR) approach. Further, we present a generalized voting algorithm targeting NMR that enables a high degree of customization from the user perspective.

Finally, we propose a novel cascading redundancy approach within a generic fault-tolerant scheduling framework. The proposed approach is capable of tolerating errors with a wider coverage (with respect to error frequency and error types) than our proposed time and space redundancy approaches in isolation, allows tasks with mixed criticality levels, is independent of the scheduling technique and, above all, ensures that every critical task instance can be feasibly replicated in time and/or space. The fault-tolerance techniques presented in this thesis address various error scenarios that can be observed in real-time embedded systems, with respect to the types of errors and frequency of occurrence, and can be used to achieve the ultra-high levels of dependability that are required in many critical systems. / PROGRESS
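A small sketch of the voting-on-time-and-value idea in a TMR setting follows: each replica delivers a (value, completion time) pair, and the voter accepts a result only if a timely majority agrees on the value. The tolerance parameter and the averaging of agreeing values are illustrative assumptions, not the VTV algorithm as specified in the thesis.

```python
# Hypothetical sketch: a TMR voter that considers both value and timing,
# in the spirit of the Voting on Time and Value (VTV) approach.
def vtv_vote(replicas, deadline, value_tolerance=0.0):
    """replicas: list of (value, completion_time); returns the voted value or None."""
    # Discard results that violate the timing requirement.
    timely = [v for v, t in replicas if t <= deadline]
    if len(timely) < 2:                 # no majority possible among three replicas
        return None
    # Find a majority of timely values that agree within the tolerance.
    for v in timely:
        agreeing = [w for w in timely if abs(w - v) <= value_tolerance]
        if len(agreeing) >= 2:
            return sum(agreeing) / len(agreeing)
    return None

# Replica 2 is late; replicas 1 and 3 agree in time, so their value is accepted.
print(vtv_vote([(10.0, 4.2), (10.0, 9.9), (10.1, 4.5)], deadline=8.0, value_tolerance=0.2))
```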
87

Predicting Quality Attributes in Component-based Software Systems

Larsson, Magnus January 2004 (has links)
No description available.
88

Operational Semantics for PLEX : A Basis for Safe Parallelization

Lindhult, Johan January 2008 (has links)
The emergence of multi-core computers implies a major challenge for existing software. Due to simpler cores, applications will face decreased performance if not executed in parallel. The problem is that much of the software is sequential.

Central parts of the AXE telephone exchange system from Ericsson are programmed in the language PLEX. The current software is executed on a single-processor architecture and assumes non-preemptive execution.

This thesis presents two versions of an operational semantics for PLEX: one that models execution on the current, single-processor, architecture, and one that models execution on an assumed shared-memory architecture. A formal semantics of the language is a necessity for ensuring the correctness of program analyses and program transformations.

We also report on a case study of the potential memory conflicts that may arise when the existing code is allowed to execute in parallel. We show that simple static methods are sufficient to resolve many of the potential conflicts, thereby reducing the amount of manual work that probably still needs to be performed in order to adapt the code for parallel processing.
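The kind of simple static check referred to in the case study can be illustrated as follows: two jobs can safely run in parallel if neither writes a shared variable that the other reads or writes. The read/write-set representation below is an assumption for illustration; the thesis analyses PLEX code, not Python.

```python
# Hypothetical sketch: a simple static conflict check between two jobs,
# based on their read and write sets of shared variables.
def conflicts(job_a, job_b):
    """Each job is a dict with 'reads' and 'writes' sets of shared-variable names."""
    return (job_a["writes"] & (job_b["reads"] | job_b["writes"])) | \
           (job_b["writes"] & (job_a["reads"] | job_a["writes"]))

job1 = {"reads": {"counter", "state"}, "writes": {"counter"}}
job2 = {"reads": {"state"}, "writes": {"log_index"}}
job3 = {"reads": {"counter"}, "writes": {"state"}}

print(conflicts(job1, job2))   # set() -> safe to parallelize
print(conflicts(job1, job3))   # {'counter', 'state'} -> potential conflict, needs manual review
```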
89

Data Management in Vehicle Control-Systems

Nyström, Dag January 2005 (has links)
As the complexity of vehicle control-systems increases, the amount of information that these systems are intended to handle also increases. This thesis provides concepts relating to real-time database management systems to be used in such control-systems. By integrating a real-time database management system into a vehicle control-system, data management on a higher level of abstraction can be achieved. Current database management concepts are not sufficient for use in vehicles, and new concepts are necessary. A case study at Volvo Construction Equipment Components AB in Eskilstuna, Sweden, presented in this thesis, together with a survey of existing database platforms, confirms this. The thesis specifically addresses data access issues by introducing: (i) a data access method, denoted database pointers, which enables data in a real-time database management system to be accessed efficiently. Database pointers, which resemble regular pointer variables, permit individual data elements in the database to be pointed out directly, without risking a violation of database integrity. (ii) Two concurrency-control algorithms, denoted 2V-DBP and 2V-DBP-SNAP, which enable critical (hard real-time) and non-critical (soft real-time) data accesses to co-exist, without blocking the hard real-time data accesses or risking unnecessary abortions of soft real-time data accesses. The thesis shows that 2V-DBP significantly outperforms a standard real-time concurrency-control algorithm, both with respect to lower response times and minimized abortions. (iii) Two concepts, denoted substitution and subscription queries, that enable service and diagnostics tools to stimulate and monitor a control-system during run-time. The concepts presented in this thesis form a basis on which a data management concept suitable for embedded real-time systems, such as vehicle control-systems, can be built. / A modern vehicle is today essentially controlled entirely by embedded computers. As the functionality of vehicles increases, the software in these computers becomes more and more complex. Complex software is difficult and costly to develop. To manage this complexity and to ease development, industry is now investing in methods for designing these systems at a higher level of abstraction. These methods aim to structure the software into its functional parts, for example by using so-called component-based software development. However, these methods are not effective at handling the growing amount of information that comes with the growing functionality of the systems. Examples of information to be handled are data from sensors spread throughout the car (temperatures, pressures, engine speed, etc.), control data from the driver (e.g. steering-wheel angle and throttle), parameter data, and log data used for service diagnostics. This information can be classified as safety-critical, since it is used to control the behaviour of the vehicle. Recently, however, the amount of non-safety-critical information has increased, for example in comfort systems such as multimedia, navigation and passenger ergonomics systems.

This thesis aims to show how a data management system for embedded systems, such as vehicle systems, can be designed. By using a real-time database management system to raise data management to a higher level of abstraction, vehicle systems can handle large amounts of information much more easily than current systems do. Such a data management system gives system architects the possibility to structure and model the information in a logical and comprehensible way. The information can then be read and updated through standardized interfaces adapted to different types of functionality. The thesis specifically addresses the problem of how information in the database, with the help of a concurrency-control algorithm, can be shared by both safety-critical and non-safety-critical system functions in the vehicle. It also discusses how information can be distributed both between different computer systems in the vehicle and to diagnostics and service tools that can be connected to the vehicle.
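A hedged sketch of the database-pointer concept described above: a handle bound once to an individual data element, after which reads and writes go directly to that element while still passing through the database's integrity checks. The class and method names are hypothetical, invented for illustration, and the real mechanism targets embedded C-level code rather than Python.

```python
# Hypothetical sketch of the database-pointer concept: a pointer-like handle
# bound to a single data element in a real-time database.
class RTDatabase:
    def __init__(self):
        self._data = {}          # element name -> value
        self._checks = {}        # element name -> optional integrity check

    def insert(self, name, value, check=None):
        self._data[name] = value
        self._checks[name] = check

    def bind_pointer(self, name):
        """Bind a database pointer to one element, e.g. at task start-up."""
        if name not in self._data:
            raise KeyError(name)
        return DatabasePointer(self, name)

class DatabasePointer:
    def __init__(self, db, name):
        self._db, self._name = db, name

    def read(self):
        return self._db._data[self._name]

    def write(self, value):
        check = self._db._checks[self._name]
        if check is not None and not check(value):   # database integrity preserved on writes
            raise ValueError(f"integrity violation on {self._name}")
        self._db._data[self._name] = value

db = RTDatabase()
db.insert("engine_rpm", 0, check=lambda v: 0 <= v <= 8000)
rpm = db.bind_pointer("engine_rpm")    # bound once
rpm.write(2500)                        # later accesses use the handle, not a query
print(rpm.read())
```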
90

A wishbone compatible SD card mass storage controller for embedded usage

Edvardsson, Adam January 2009 (has links)
The purpose of this thesis was to develop an open source SD card controller IP core for use in small embedded systems. Emphasis has been placed on using as few logic gates as possible, providing an easy user interface, and making it viable as a system disk controller.

For the most part, the lack of a complete open SD specification has mainly affected embedded systems, since desktop users generally read SD cards via USB-based card readers. But recent openings of the SD specification have made it possible to develop SD card readers that utilize the SD bus protocol.

The hardware parts have been implemented in Verilog, and the software was developed in C. The proposed design has features common in disk controllers, such as direct memory access, interrupt generation, and error control.

The design uses approximately 4000 core cells and 2 RAM blocks, about 50% less logic than a commercial alternative (Eureka EP560).

A second, smaller core was also developed by making a few modifications to the full design, thereby showing the strength of a freely modifiable open IP core.
