  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
581

Interopérabilité de modèles dans le cycle de conception des systèmes électromagnétiques via des supports complémentaires : VHDL-AMS et composants logiciels ICAr / Interoperability of models in the design cycle of electromagnetic systems through complementary supports : VHDL-AMS language and ICAr software components

Rezgui, Abir 25 October 2012 (has links)
This PhD thesis deals with modeling formalisms for multi-physical systems across the design V-cycle. The work was carried out within the French ANR MoCoSyMec project, following the functional virtual prototyping (PVF) methodology, and is illustrated on electromagnetic systems. The work focuses on the VHDL-AMS modeling language as a support for the several modeling levels that appear in the design V-cycle, which led us to study the portability and interoperability in VHDL-AMS of various modeling methods and tools. Solutions to the limits of VHDL-AMS, in particular for modeling physical phenomena that rely on numerical computation, have been proposed and validated through the ICAr software component formalism. The ICAr standard has been extended to support dynamic models described by differential algebraic equations (DAEs); for co-simulation purposes, a solver can also be associated with the dynamic model inside the ICAr component. These developments are now capitalized in the CADES framework. Finally, an architecture has been proposed for porting models from one formalism to another, specifically into VHDL-AMS; it has been designed and implemented for reluctance network models (RelucTool) and magnetic MEMS (MacMMems). These formalisms and methodologies are applied to the functional virtual prototype of an electromagnetic contactor.
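The abstract's idea of a software component bundling a dynamic model (a DAE) with its own solver for co-simulation can be sketched as follows. This is a minimal illustration of the pattern, not the actual ICAr/CADES API: the class names, the RL-circuit model, and the backward Euler step are all assumptions made for the example.

```python
# Sketch of a component that packages a dynamic model (a DAE/ODE
# residual) together with an associated solver, in the spirit of the
# extended ICAr components described above. Names are illustrative.

class DynamicComponent:
    """Model: series RL circuit driven by a DC source.
    Residual form: L*di/dt + R*i - V = 0."""
    def __init__(self, L=0.1, R=10.0, V=10.0):
        self.L, self.R, self.V = L, R, V

    def residual(self, i, didt):
        return self.L * didt + self.R * i - self.V

class BackwardEulerSolver:
    """Solver associated with the component: advances the state with
    an implicit (backward Euler) step, as a co-simulation master would."""
    def __init__(self, component, dt=1e-3):
        self.c, self.dt = component, dt

    def step(self, i_n):
        c = self.c
        # Solve c.residual(i_next, (i_next - i_n)/dt) = 0 for i_next;
        # the model is linear, so the step has a closed form.
        return (c.L * i_n / self.dt + c.V) / (c.L / self.dt + c.R)

def simulate(steps=2000):
    comp = DynamicComponent()
    solver = BackwardEulerSolver(comp)
    i = 0.0
    for _ in range(steps):
        i = solver.step(i)
    return i  # current settles toward the steady state V/R
```

Packaging the solver with the model, as the extended ICAr standard does, lets the consuming tool drive the component step by step without knowing anything about the equations inside it.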
582

Bezpečnost biometrických systémů / Security of Biometric Systems

Lodrová, Dana Unknown Date (has links)
The main contributions of this thesis are two new approaches for increasing the security of biometric systems based on fingerprint recognition. The first approach belongs to the area of liveness detection and prevents the use of various types of fake fingerprints and other methods of deceiving the sensor during the fingerprint acquisition process. This patented approach is based on the change in color and width of the papillary lines when the finger is pressed against a glass surface. The resulting liveness-detection unit can be integrated into optical sensors. The second approach belongs to the area of standardization and increases the security and interoperability of the minutiae extraction and matching processes. For this purpose, I created a methodology that establishes semantic conformance rates for fingerprint minutiae extractors. The minutiae found by the tested extractors are compared against ground-truth minutiae obtained by clustering data provided by dactyloscopic experts. The proposed methodology is included in the proposed amendment to the ISO/IEC 29109-2 standard (Amd. 2 WD4).
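The core of a semantic conformance measure like the one described can be sketched as follows: a detected minutia is counted as correct when a ground-truth minutia of the same type lies within a distance tolerance, matched one-to-one. This is an illustrative simplification, not the procedure standardized in ISO/IEC 29109-2; the tolerance and the data layout are assumptions.

```python
# Illustrative sketch of scoring an extractor's minutiae against
# ground-truth minutiae: a detection counts as a hit if a ground-truth
# minutia of the same type lies within a distance tolerance.
import math

def conformance_rate(detected, ground_truth, tol=10.0):
    """detected / ground_truth: lists of (x, y, type) tuples.
    Returns the fraction of ground-truth minutiae matched one-to-one."""
    unmatched = list(ground_truth)
    hits = 0
    for (x, y, t) in detected:
        best, best_d = None, tol
        for gt in unmatched:
            gx, gy, gt_type = gt
            d = math.hypot(x - gx, y - gy)
            if gt_type == t and d <= best_d:
                best, best_d = gt, d
        if best is not None:
            unmatched.remove(best)   # enforce one-to-one matching
            hits += 1
    return hits / len(ground_truth)
```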
583

Développement d'une méthodologie d'échange des métadonnées des objets numériques d'apprentissage pour une interopérabilité entre plates-formes d'elearning hétérogènes: cas de l'Université de Lubumbashi (R.D Congo) et ses partenaires belges / Development of a methodology of exchange of learning object metadata for interoperability between LMS, case of University of Lubumbashi and belgian partners.

Mwepu Fyama, Blaise 12 October 2011 (has links)
In this thesis, in the context of e-learning, we focus on implementing a methodology to ensure interoperability between the learning platform of the University of Lubumbashi and the platforms of its Belgian partner universities, so that learning content can be transferred between them. Our approach does not stop at content exchange: we go as far as proposing means to enable a dynamic follow-up of students via communication tools. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
584

Proposing a New System Architecture for Next Generation Learning Environment

Aboualizadehbehbahani, Maziar January 2016 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Exchanging information and offering features through external interfaces is a vast but immensely valuable challenge, and learning environments are no exception. Nowadays, many different service providers compete in the learning systems market, each with its own strengths. On that premise, even large learning management systems are trying to cooperate with each other in order to stay competitive. For instance, Instructure is a substantial company that could easily employ a dedicated team to develop video conferencing functionality, but it chooses to use an open-source alternative instead: BigBlueButton. Unfortunately, different learning system manufacturers use different technologies for various reasons, making integration that much harder. Standards in learning environments have emerged to resolve problems with exchanging information, providing and consuming functionality externally, and minimizing the effort needed to integrate systems. In addition to defining and simplifying these standards, careful consideration is essential when designing new, comprehensive, and useful systems, as well as when adding interoperability to existing systems; all of these concerns are addressed in this research. In this research I have reviewed most of the standards and protocols for integration in learning environments and proposed a revised approach for app stores in learning environments. Finally, as a case study, a learning tool has been developed that exposes essential functionality of a social educational learning management system integrated with other learning management systems. This tool supports the dominant and most popular interoperability standards and can be added to learning management systems within seconds.
585

IMPLEMENTING NETCONF AND YANG ON CUSTOM EMBEDDED SYSTEMS

Georges, Krister, Jahnstedt, Per January 2023 (has links)
Simple Network Management Protocol (SNMP) has been the traditional approach to configuring and monitoring network devices, but its limitations in security and automation have driven the exploration of alternative solutions. The Network Configuration Protocol (NETCONF) and the Yet Another Next Generation (YANG) data modeling language significantly improve security and automation capabilities. This thesis investigates the feasibility of implementing a NETCONF server on the Anybus CompactCom (ABCC) Industrial Internet of Things (IIoT) Security module, an embedded device with limited processing power and memory that runs a custom operating system, using open-source projects with MbedTLS as the cryptographic primitive library. The project assesses implementing a YANG model to describe the ABCC's configurable interface, connecting with a NETCONF client to exchange capabilities, monitoring specific attributes or interfaces on the device, and invoking remote procedure call (RPC) commands to configure the ABCC's settings. The goal is to provide a proof of concept and to contribute to the growing industry trend of adopting NETCONF and YANG, particularly for the IIoT platform of Hardware Meets Software (HMS).
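The RPC exchange such a server must handle can be illustrated with the XML message shapes involved: a client `<get-config>` request and the server's `<rpc-reply>`. This sketch shows only the message construction and parsing; real NETCONF runs over SSH with a hello/capability exchange (RFC 6241), and the `interface` leaf below is a made-up stand-in, not the actual ABCC YANG model.

```python
# Minimal sketch of NETCONF message shapes: building a <get-config>
# RPC and parsing an <rpc-reply> payload. Transport, framing, and the
# capability exchange of RFC 6241 are omitted; the "interface" leaf is
# a hypothetical example, not the real ABCC model.
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(message_id, source="running"):
    """Build an <rpc><get-config> request as an XML string."""
    rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": str(message_id)})
    get_cfg = ET.SubElement(rpc, f"{{{NS}}}get-config")
    src = ET.SubElement(get_cfg, f"{{{NS}}}source")
    ET.SubElement(src, f"{{{NS}}}{source}")
    return ET.tostring(rpc, encoding="unicode")

def parse_reply(xml_text):
    """Extract config leaves from an <rpc-reply><data> payload."""
    root = ET.fromstring(xml_text)
    data = root.find(f"{{{NS}}}data")
    return {child.tag: child.text for child in data}

# Example reply a server might send back (namespaced, message-id echoed).
reply = (
    f'<rpc-reply xmlns="{NS}" message-id="1">'
    f"<data><interface>eth0</interface></data>"
    f"</rpc-reply>"
)
```

On a constrained device like the ABCC, the server side of this exchange is essentially the reverse: parse the incoming `<rpc>`, dispatch on the operation, and serialize the YANG-modeled state back into the `<data>` element.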
586

Interoperability of Traffic Infrastructure Planning and Geospatial Information Systems

Nejatbakhsh Esfahani, Nazereh 01 October 2018 (has links)
Building Information Modelling (BIM), as model-based design, makes it possible to investigate multiple solutions in the infrastructure planning process. The most important reasons for implementing model-based design are to help designers and to increase communication between the different design parties. It decentralizes and coordinates team collaboration and enables faster, lossless exchange and management of project data across extended teams and external partners over the project lifecycle. Infrastructure comprises the fundamental facilities, services, and installations needed for the functioning of a community or society, such as transportation, roads, communication systems, water and power networks, and power plants. Geospatial Information Systems (GIS), as the digital representation of the world, are systems for maintaining, managing, modelling, analyzing, and visualizing world data, including infrastructure. High-level infrastructure suites mostly support analyzing an infrastructure design against international or user-defined standards. This regulation-based design minimizes errors, reduces costly design conflicts, saves time, and provides consistent project quality, yet mostly in standalone solutions. Infrastructure tasks usually require both model-based and regulation-based design packages and deal with cross-domain information, but the corresponding data is split across several domain models. Infrastructure projects also demand many decisions at the governmental as well as the private level, each considering different data models. A lossless flow of project data, and of documents such as regulations, across the project team, stakeholders, and governmental and private levels is therefore highly important. Yet infrastructure projects were largely absent from product modelling discourses for a long time. Thus, as explained in chapter 2, interoperability is needed in infrastructure processes.

The Multimodel (MM) is an interoperability method that bundles heterogeneous data models from various domains into a container while keeping their original formats. Existing interoperability methods, including existing MM solutions, cannot satisfactorily fulfill the typical demands of infrastructure information processes, such as dynamic data resources and a huge number of inter-model relations. Chapter 3 therefore investigates, under the concept of infrastructure information modelling, a method for loose and rule-based coupling of exchangeable heterogeneous information spaces. The hypothesis extends the existing MM to a rule-based Multimodel, named the extended Multimodel (eMM), with semantic rules instead of static links. The semantic rules describe relations between data elements of the various models dynamically in a link database. Much of the confusion about geospatial data models arises from their diversity: in some of these data models, spatial IDs are the basic identities of entities, while in others there are no IDs at all. In geospatial data, the data structure is therefore more important than the data model, and there are always spatial indexes that enable access to the geodata. The most important common ground of the data models involved in infrastructure projects is spatiality. As explained in chapter 4, the infrastructure information modelling method for interoperation in spatial domains generates interlinks through the spatial identity of entities. Match finding through spatial links enables any data models that share spatial properties to be interlinked. Through such spatial links, each entity receives from the other data models the information related to it by virtue of sharing an equivalent spatial index; this information becomes virtual properties of the object. The thesis uses a nearest-neighbor algorithm for spatial match finding and applies filtering and refining approaches.

For the abstraction of the spatial matching results, hierarchical filtering techniques are used to refine the virtual properties. These approaches focus on two main application areas: the product model and the Level of Detail (LoD). For the eMM suggested in this thesis, a rule-based interoperability method between arbitrary data models of the spatial domain has been developed; its implementation enables lossless transactions of data in spatial domains. The system architecture and the implementation, applied to the case study of this thesis, namely infrastructure and geospatial data models, are described in chapter 5. Achieving the aforementioned aims reduces whole-lifecycle project costs, increases the reliability of the comprehensive fundamental information, and consequently supports independent, cost-effective, aesthetically pleasing, and environmentally sensitive infrastructure design.
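The nearest-neighbor spatial match finding described above can be sketched in a few lines: entities from two heterogeneous models are linked when they are within a distance tolerance, and the matched partner's attributes are attached as "virtual properties". The data layout and names are illustrative assumptions, not the thesis's actual eMM implementation, and brute-force search stands in for the tree-based spatial index a real system would use.

```python
# Sketch of rule-based spatial linking: entities from two data models
# are matched by nearest neighbor within a tolerance, and attributes of
# the matched partner become "virtual properties". A real implementation
# would query a tree-based spatial index (e.g. an R-tree); brute force
# keeps the sketch self-contained.
import math

def nearest_neighbor_link(model_a, model_b, tol=5.0):
    """model_a / model_b: lists of dicts with a 'pos' (x, y) key plus
    arbitrary attributes. Returns model_a entities enriched with the
    attributes of their nearest model_b entity within tol."""
    linked = []
    for ent in model_a:
        ax, ay = ent["pos"]
        best, best_d = None, tol
        for other in model_b:
            bx, by = other["pos"]
            d = math.hypot(ax - bx, ay - by)
            if d <= best_d:
                best, best_d = other, d
        enriched = dict(ent)
        if best is not None:
            # attach the partner's attributes as virtual properties
            enriched["virtual"] = {k: v for k, v in best.items() if k != "pos"}
        linked.append(enriched)
    return linked
```

Because the link is computed from spatial position alone, the two models need not share identifiers or even a schema, which is exactly the property the eMM exploits.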
587

A Framework for Interoperability on the United States Electric Grid Infrastructure

Laval, Stuart 01 January 2015 (has links)
Historically, the United States (US) electric grid has been a stable one-way power delivery infrastructure that supplies centrally generated electricity to a predictably consuming demand. The US electric grid is now undergoing a huge transformation, however, from a simple and static system to a complex and dynamic network that is starting to interconnect intermittent distributed energy resources (DERs), portable electric vehicles (EVs), and load-altering home automation devices, which create bidirectional power flow and stochastic load behavior. For this grid of the future to embrace the high penetration of these disruptive, fast-responding digital technologies without compromising its safety, reliability, and affordability, plug-and-play interoperability within the field area network must be enabled between operational technology (OT), information technology (IT), and telecommunication assets, so that they integrate seamlessly and securely into the electric utility's operations and planning systems in a modular, flexible, and scalable fashion. This research proposes an approach to simplifying the translation and contextualization of operational data on the electric grid without routing it to the utility datacenter for a control decision. The methodology integrates modern software technology from other industries with utility industry-standard semantic models to overcome information siloes and enable interoperability. Leveraging industrial engineering tools, a framework is also developed to help devise a reference architecture and a use-case application process, which is applied and validated at a US electric utility.
588

Ranking System for IoT Industry Platform

Mukherjee, Somshree January 2016 (has links)
The Internet of Things (IoT) has seen a huge growth spurt in the last few years, which has resulted in a need for more standardised IoT technology. Because of this, numerous IoT platforms have sprung up that offer a variety of features and use different technologies, which may not necessarily be compatible with each other or with other technologies. Companies that wish to enter the IoT market constantly need to find the most suitable IoT platform for their business, and have a certain set of requirements that an IoT platform must fulfil for their application to be fully functional. The problem this thesis project addresses is the lack of a standardised procedure for selecting IoT platforms. The project proposes a list of requirements, derived from the available IoT architecture models, that IoT applications in general must follow; a subset of these requirements may be specified by a company as essential for its application. The project also develops a Web platform to automate this process: the requirements are listed on the website, companies input their choices, and the site shows them the IoT platforms that comply with the input requirements. A simple Weighted Sum Model is used to rank the search results, so that the IoT platforms are prioritised by the features they provide. The project also infers the best available IoT architectural model, based on a comparative study of three major IoT architectures against the proposed requirements. The project concludes that this Web platform eases the search for the right IoT platform, so companies can make an informed decision about the kind of IoT platform they should use, reducing the time spent on market research and hence their time-to-market.
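The Weighted Sum Model ranking mentioned above is simple to sketch: each platform receives a score equal to the weighted sum of how well it satisfies each requirement, and platforms are sorted by score. The platform names, requirement keys, and weights below are made-up examples, not data from the thesis.

```python
# Sketch of a Weighted Sum Model ranking: score each platform as
# sum(weight * feature_score) over the requirements a company marked
# as important, then sort best-first. All data here is illustrative.

def weighted_sum_rank(platforms, weights):
    """platforms: {name: {requirement: score in 0..1}}
    weights: {requirement: importance}. Returns names, best first."""
    def total(features):
        return sum(w * features.get(req, 0.0) for req, w in weights.items())
    return sorted(platforms, key=lambda name: total(platforms[name]),
                  reverse=True)

platforms = {
    "PlatformA": {"security": 1.0, "protocol_support": 0.5},
    "PlatformB": {"security": 0.4, "protocol_support": 1.0},
}
weights = {"security": 3.0, "protocol_support": 1.0}
```

With these weights, a company that prioritises security ranks PlatformA first; halving the security weight would flip the order, which is why eliciting the weights from the company's stated requirements matters.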
589

Enrichment of Archetypes with Domain Knowledge to Enhance the Consistency of Electronic Health Records

Giménez Solano, Vicente Miguel 21 January 2022 (has links)
Consistency of EHR data, as a dimension of quality, is considered an essential requirement for improving healthcare delivery and clinical decision-making processes and for promoting clinical research. In this context, cooperation between information models and domain models has been considered essential in the literature, but it has not been adequately addressed by the scientific community to date. The main contribution of this thesis is the development of methods and tools for the inclusion of terminology binding expressions in consistency rules. The specific contributions are:
- Definition of a method to execute Expression Constraints (ECs) over a SNOMED CT graph-oriented database.
- Definition of methods to simplify ECs before and after their execution, and to validate them semantically against the SNOMED CT Machine Readable Concept Model (MRCM).
- Definition of a method to visualize, dynamically explore, understand, and validate SNOMED CT subsets.
- Development of SNQuery, a platform that executes, simplifies, and validates ECs and visualizes the resulting subsets.
- Definition of EHRules, an expression language based on the openEHR Expression Language for specifying consistency rules in archetypes, including value set bindings, in order to enrich archetypes with domain knowledge.
- Definition of a method to execute EHRules expressions over patient data instances in order to validate the consistency of EHR data.
Our objective is that these contributions help enhance the quality of EHRs, as they provide methods and tools for validating and improving the consistency of EHR data. By defining value set bindings between information models and clinical terminologies, we also intend to raise the level of semantic interoperability, for which the definition of terminology bindings is crucial. / This thesis was partially funded by Ministerio de Economía y Competitividad, “Doctorados Industriales”, grant DIN2018-009951, and by Universitat Politècnica de València, “Formación de Personal Investigador” (FPI-UPV). / Giménez Solano, VM. (2021). Enrichment of Archetypes with Domain Knowledge to Enhance the Consistency of Electronic Health Records [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/180082
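Executing an Expression Constraint such as a descendant-or-self query (the ECL `<<` operator) over a graph-oriented store amounts to traversing the subtype (is-a) relationships of the terminology. A toy sketch follows, with an invented mini-hierarchy standing in for real SNOMED CT content; the concept names and the dictionary-based graph are assumptions for illustration only.

```python
# Toy sketch of executing a "descendant-or-self" Expression Constraint
# (ECL operator <<) over an in-memory concept graph. The concepts and
# hierarchy are invented, not real SNOMED CT content.
from collections import deque

# child -> parents via the is-a relationship
IS_A = {
    "DM_type1": ["DM"],
    "DM_type2": ["DM"],
    "DM": ["Disorder"],
    "Disorder": [],
}

def children_of(concept):
    return [c for c, parents in IS_A.items() if concept in parents]

def descendants_or_self(concept):
    """Evaluate '<< concept': breadth-first traversal down is-a links."""
    result, queue = set(), deque([concept])
    while queue:
        cur = queue.popleft()
        if cur in result:
            continue
        result.add(cur)
        queue.extend(children_of(cur))
    return result
```

In a graph database the same traversal is a path query over is-a edges, which is why a graph-oriented store is a natural execution target for ECs.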
590

Enshittification av sociala medier : En studie i digitala fotbojor / Enshittification of social media : A study in the shackles of the digital age

Johansson, Carl-Johan, Kovacevic Gahne, Franco January 2024 (has links)
A majority of the world's population today resides on social media, while a small group of platforms dominates the social media landscape. Although these services have experienced great growth, both in registered users and in market dominance, they have also been heavily criticized for the way the user experience seems to have deteriorated over time, particularly with respect to how disinformation spreads through the networks and how these services monetize their users' personal data. Enshittification is a phenomenon, coined by the tech journalist and critic Cory Doctorow, that describes the way these platforms actively work to make the user experience worse. The phenomenon asserts that people keep using services that exploit them, due to high switching costs of a personal or economic nature (or both), even as the user experience deteriorates. This study offers a theoretical grounding of enshittification as a phenomenon, along with a critical perspective on its manifestation in social networks. Its purpose is to define the phenomenon and to establish a dialogue within the research field of information systems (IS). The study also offers insight into the underpinnings of enshittification and its consequences for end users, along with a critical reflection on possible mitigation strategies. It further argues that critical theory is needed in IS research in order to analyze phenomena like enshittification and similar social aspects that manifest themselves within information technology.
