251

A Comparison on Image, Numerical and Hybrid based Deep Learning for Computer-aided AD Diagnostics / En jämförelse av bild, numerisk och hybrid baserad djupinlärning för datorassisterad AD diagnostik

Buvari, Sebastian, Pettersson, Kalle January 2020 (has links)
Alzheimer's disease (AD) is the most common form of dementia, accounting for 60-70% of the roughly 50 million dementia cases worldwide. It is a degenerative disease that causes irreversible damage to the parts of the brain associated with thinking and memory. Considerable time and effort has been put towards diagnosing and detecting AD in its early stages, and deep learning is a field showing great promise for early-stage detection. The main obstacle to deep learning in AD detection is the lack of the relatively large datasets typically needed to train an accurate network. This thesis examines whether combining image-based and numerical data from MRI scans can increase the accuracy of the network. Three deep learning neural network models were constructed with the TensorFlow framework as AD classifiers using numerical, image-based, and hybrid input data gathered from the OASIS-3 dataset. The results showed that the hybrid model achieved a slight increase in accuracy over the image-based and numerical models. The report concludes that a hybrid AD classifier shows promise as a more accurate and stable model, but the results were not conclusive enough to give a definitive answer.
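The two-branch architecture this abstract describes can be sketched with the Keras functional API. The layer sizes, input shapes, and the binary (AD vs. non-AD) output head below are illustrative assumptions, not the thesis's actual configuration:

```python
from tensorflow.keras import Model, layers

def build_hybrid_classifier(image_shape=(64, 64, 1), n_numeric=8, n_classes=2):
    """Two-branch network: a small CNN for MRI slices and a dense branch for
    numerical features, concatenated before the classification head.
    All sizes are illustrative placeholders."""
    # Image branch: a minimal CNN feature extractor
    img_in = layers.Input(shape=image_shape)
    x = layers.Conv2D(16, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)

    # Numerical branch: a small dense encoder for tabular features
    num_in = layers.Input(shape=(n_numeric,))
    y = layers.Dense(16, activation="relu")(num_in)

    # Fuse both branches, then classify
    merged = layers.concatenate([x, y])
    merged = layers.Dense(32, activation="relu")(merged)
    out = layers.Dense(n_classes, activation="softmax")(merged)
    return Model(inputs=[img_in, num_in], outputs=out)
```

The fusion point (simple concatenation before the head) is one common design choice; the thesis may fuse the branches differently.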
252

Discovering plagiarism in introductory programming courses through the application of multiple methods / Upptäcka plagiat i introduktionsprogrammeringskurser genom applikation av flera metoder

Olsson, Glenn, Pålsson Norlin, Fredrik January 2020 (has links)
Plagiarism is a common problem in the academic world. Multiple studies show that over 30% of students in introductory university programming courses plagiarise their code at least once. Unlike plagiarism of academic texts, however, plagiarised code is harder to discover because subtle changes can be made, for instance renaming variables. The field of code plagiarism detection is not uncharted, and several approaches have been suggested. This thesis investigates whether multiple plagiarism detection techniques can be combined into a user-friendly tool intended for examiners of introductory programming courses. The tool, named DecPlag, not only checks for plagiarism among new submissions but also keeps a database of previous course iterations, so that foul play between new and older students can be discovered as well. The resulting tool turned out to be slow, taking multiple days to compare the data of two course iterations, and it outputs a large number of false positives due to the structure of the research data. Nevertheless, DecPlag did find potential plagiarism cases.
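A minimal illustration of rename-resistant code comparison, the core problem this abstract describes. This sketch is not DecPlag's actual method; it simply masks identifiers and literals before measuring similarity, so that renaming variables no longer hides a copy:

```python
import difflib
import io
import keyword
import token
import tokenize

def normalized_tokens(source: str) -> list:
    """Tokenize Python source, masking identifiers and literals so that
    renaming variables or changing constants does not hide plagiarism."""
    skip = {token.NEWLINE, token.NL, token.INDENT, token.DEDENT,
            token.COMMENT, token.ENDMARKER}
    toks = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in skip:
            continue
        if tok.type == token.NAME and not keyword.iskeyword(tok.string):
            toks.append("<id>")       # all identifiers look the same
        elif tok.type in (token.NUMBER, token.STRING):
            toks.append("<lit>")      # all literals look the same
        else:
            toks.append(tok.string)   # keywords and operators kept as-is
    return toks

def similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] between two submissions after normalization."""
    return difflib.SequenceMatcher(
        None, normalized_tokens(a), normalized_tokens(b)).ratio()
```

With this normalization, two functions that differ only in variable names compare as identical, while a genuinely different algorithm scores lower.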
253

Deep Learning Approach for Diabetic Retinopathy Grading with Transfer Learning / Användning av djupinlärning med överföringsinlärning för gradering av diabetisk näthinnesjukdom

Andersen, Linda, Andersson, Philip January 2020 (has links)
Diabetic retinopathy (DR) is a complication of diabetes that affects the eyes, and it is one of the leading causes of blindness in the Western world. As the number of people with diabetes grows globally, so does the number affected by diabetic retinopathy. This demands better and more effective resources for discovering the disease at an early stage, which is key to preventing progression into more serious stages that could ultimately lead to blindness, and for streamlining further treatment. Traditional manual screening alone cannot meet this demand, which is where computer-aided diagnosis comes in. The purpose of this report is to investigate how a convolutional neural network combined with transfer learning performs when trained for multiclass grading of diabetic retinopathy. A pre-built, pre-trained convolutional neural network from Keras was further trained and fine-tuned in TensorFlow on a 5-class DR grading dataset. Twenty training sessions were performed, and accuracy, recall, and specificity were evaluated in each session. Testing accuracies ranged from 35% to 48.5%. The average testing recall for classes 0, 1, 2, 3, and 4 was 59.7%, 0.0%, 51.0%, 38.7%, and 0.8%, respectively, and the average testing specificity was 77.8%, 100.0%, 62.4%, 80.2%, and 99.7%, respectively. The average recall of 0.0% and average specificity of 100.0% for class 1 (mild DR) arose because the CNN model never predicted this class.
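The class-1 result above (0.0% recall and 100.0% specificity for a class the model never predicts) follows directly from how the two metrics are defined. A small sketch, independent of any particular model:

```python
import numpy as np

def per_class_recall_specificity(y_true, y_pred, n_classes):
    """Per-class recall (sensitivity) and specificity computed from a
    confusion matrix, as reported in the study."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    recall, specificity = [], []
    for c in range(n_classes):
        tp = cm[c, c]
        fn = cm[c, :].sum() - tp          # class-c samples predicted as others
        fp = cm[:, c].sum() - tp          # other samples predicted as class c
        tn = cm.sum() - tp - fn - fp
        recall.append(tp / (tp + fn) if (tp + fn) else 0.0)
        specificity.append(tn / (tn + fp) if (tn + fp) else 0.0)
    return recall, specificity
```

A class that is never predicted has zero true positives (recall 0) and zero false positives (specificity 1), exactly the class-1 pattern in the results.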
254

Exploring the Potential for Machine Learning Techniques to Aid in Categorizing Electron Trajectories during Magnetic Reconnection / En utforskande studie om potentialen för maskininlärningstekniker att bistå vid kategorisering av elektrontrajektorier under magnetisk rekonnektion

Nyman, Måns, Ulug, Caner Naim January 2020 (has links)
Magnetic reconnection drives space weather, which has a direct impact on our contemporary technological systems; the phenomenon therefore has serious ramifications for humans. Magnetic reconnection has been studied for a long time, yet many aspects of the phenomenon remain unexplored. Scientists in the field believe that electron dynamics play an important role in magnetic reconnection, during which electrons can be accelerated to high velocities. A large number of studies have examined the trajectories these electrons exhibit, and researchers in the field can easily identify the type of trajectory a specific electron follows given a plot of that trajectory. Attempting to do this manually for a more realistic number of electrons, however, is neither easy nor efficient. Using machine learning techniques to categorize these trajectories could speed this process up immensely, yet to date no such attempt has been made. This thesis attempts to answer how certain machine learning techniques perform at this task. Principal component analysis and K-means clustering were the main methods, applied after different preprocessing methods on the given dataset. The elbow method was employed to find the optimal K value and was complemented by self-organizing maps, and the silhouette coefficient was used to measure the performance of the methods. The first-centering and mean-centering preprocessing methods yielded the two highest silhouette coefficients, thus displaying the best quantitative performance. However, inspection of the clusters pointed to a lack of perfect overlap between the classes detected by the employed techniques and the classes identified in previous physics articles. Nevertheless, machine learning methods proved to possess potential worth exploring in greater detail in future studies in the field of magnetic reconnection.
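A sketch of the PCA + K-means pipeline the abstract describes, using scikit-learn on synthetic data. One simplification to note: the silhouette coefficient is used here to select K directly, whereas the thesis selects K with the elbow method complemented by self-organizing maps and uses the silhouette only as a performance measure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def cluster_after_pca(X, k_values=range(2, 6), n_components=2, seed=0):
    """Standardize, project with PCA, run K-means for each candidate K,
    and return the K with the best silhouette coefficient."""
    Xp = PCA(n_components=n_components, random_state=seed).fit_transform(
        StandardScaler().fit_transform(X))
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_values:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(Xp)
        score = silhouette_score(Xp, labels)  # higher = better separation
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_score, best_labels
```

On trajectory data, X would be one flattened or feature-engineered row per electron trajectory; the preprocessing step is where choices like first-centering vs. mean-centering come in.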
255

Building OCI Images With a Container Orchestrator : A comparison of OCI build-tools

Sjödin, Jonas January 2021 (has links)
Cloud computing is a quickly growing field in modern computer science, where new technologies arise every day. One of the latest trends in cloud computing is container-based technology, which allows applications to run in a reproducible and stateless fashion without requiring manually installed dependencies. Another trend is DevOps, a methodology where developers take part in the operations process. DevOps popularised the use of CI/CD workflows, where automatic pipelines run tests and scripts on new code. A container orchestrator such as Kubernetes can be used to control and modify containers, and it allows integrating third-party applications that monitor performance, analyse logs, and much more. Kubernetes can be integrated into the CI/CD system to utilise its container orchestration perks. Building containers inside a container, however, can cause security issues because of native security flaws in OCI build tools. This thesis examines these issues and analyses the field of container-orchestrated OCI build tools using Kubernetes. It also shows how to develop a test suite that can reliably test container-orchestrated OCI build tools and export metrics. Lastly, the thesis compares different Dockerfile-compliant build tools with the test suite to find out which has the best performance and caching. The compared build tools are BuildKit, Kaniko, Img, and Buildah; overall, BuildKit and Kaniko are the fastest and most resource-efficient. Which build tool is the most secure is less obvious. Kaniko runs as a root container but requires no privileges, making it hard to break out of, although an eventual breakout would give the attacker root access to the host machine. BuildKit and Img only require unconfined seccomp and AppArmor, which makes a container breakout more probable, though less so than with Buildah, which must be run in a privileged container. Since BuildKit and Img can run rootless, an attacker who breaks out of the container only gains the same access to the host as the user running it.
256

SARS-CoV-2 Lineage Clustering : Using Unsupervised Machine Learning / Klustring av SARS-CoV-2 varianter med oövervakad maskininlärning

Hedlund, Amanda, Forsman, Fonzie January 2022 (has links)
Sequencing of genetic information, and access to that information, has proved very useful in the research and understanding of viruses. It can, for example, be used to develop vaccines, manage pandemics, and map a virus's spread and development. During the SARS-CoV-2 pandemic, a nomenclature for the virus was created by the Pango database with the help of the GISAID database and other genetic databases. This study examines whether a new grouping of SARS-CoV-2 genomes from Sweden and Spain could provide new information or reveal trends in the genetic data, using two clustering algorithms: k-means and agglomerative clustering. K-means was chosen because it is scalable, which fits the large dataset; agglomerative clustering was chosen because it is a hierarchical algorithm that can also serve as a summarisation of the data. The results mainly indicated a bias in the GISAID database, with the samples collected not being representative of the population or the true spread of the SARS-CoV-2 virus. The results also showed that k-means can create groupings of similar quality to the Pango lineages in some respects, but that it is hard to quantify how good a grouping is for this type of data. The agglomerative clustering showed that the sequences are overall similar, though there are some differences between the larger variants of the virus. To further test and evaluate these conclusions, a bigger dataset covering multiple countries should be examined.
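A toy sketch of agglomerative clustering of equal-length sequences, in the spirit of this abstract. The Hamming-distance encoding, the average linkage, and the cluster count are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_sequences(seqs, n_clusters=2):
    """Average-linkage agglomerative clustering of equal-length sequences
    using pairwise normalized Hamming distances."""
    n = len(seqs)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mismatches = sum(a != b for a, b in zip(seqs[i], seqs[j]))
            d[i, j] = d[j, i] = mismatches / len(seqs[i])
    # squareform converts the symmetric matrix to the condensed form
    # that scipy's hierarchical clustering expects
    Z = linkage(squareform(d), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

The linkage matrix Z also encodes the full dendrogram, which is what makes a hierarchical method double as a summary of the data.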
257

Nonuniform bandpass sampling in radio receivers

Sun, Yi-Ran January 2004 (has links)
As an interface between radio receiver front-ends and digital signal processing blocks, sampling devices play a dominant role in digital radio communications. Based on different sampling theorems (e.g., the classic Shannon sampling theorem, Papoulis' generalized sampling theorem, and bandpass sampling theory), signals are processed by the sampling devices and then undergo additional processing. It is a natural goal to obtain the signals at the output of the sampling devices without loss of information. In conventional radio receivers, all down-conversion and channel selection are realized in analog hardware, and the associated sampling devices in A/D converters are based on the classic Shannon sampling theorem. Driven by the increased speed of microprocessors, there is a tendency to use mixed-signal/digital hardware and software to realize more functions (e.g., down-conversion, channel selection, demodulation, and detection) in a radio communication system. The new evolution of radio receiver architecture is Software Defined Radio (SDR), and one design goal of SDR is to put the A/D converter as close as possible to the antenna. BandPass Sampling (BPS) enables an interface between the higher IF and the A/D converter at a sampling rate of 2B or more (where B is the information bandwidth), and it might be a solution for SDR. A signal can be uniquely determined from its samples by NonUniform Sampling (NUS), so NUS has the potential to suppress harmful signal spectrum aliasing. BPS makes use of signal spectrum aliasing to represent the signal uniquely at any band position, but harmful aliasing of the signal spectrum causes performance degradation, so it is of great benefit to use a NUS scheme in a BPS system. However, a signal cannot be recovered from its nonuniform samples using only an ideal lowpass filter (the classic Shannon reconstruction function), so the reconstruction of the samples taken by NUS is crucial for the implementation of NUS. Besides harmful signal spectrum aliasing, noise aliasing and timing jitter are two other sources of performance degradation in a BPS system. Noise aliasing is the direct consequence of the lower sampling rate of subsampling, and as the input frequency increases when directly sampling a signal at a higher IF, timing error in the sampling clock causes large jitter effects on the sampled-data signal. In this thesis work, first, a filter generalized by a certain Reconstruction Algorithm (RA) is proposed to reconstruct the signal from its nonuniform samples.
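The spectrum folding that bandpass sampling exploits can be demonstrated numerically. In this toy sketch (all frequencies are illustrative, not values from the thesis), a tone well above half the sampling rate aliases down to a predictable lower frequency:

```python
import numpy as np

def dominant_frequency(x, fs):
    """Frequency (Hz) of the strongest bin in the one-sided DFT of x."""
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), d=1.0 / fs)[np.argmax(spectrum)]

fc = 20.0   # toy carrier ("IF") frequency in Hz
fs = 16.0   # subsampling rate: far below 2*fc, but >= 2*B for a narrow band
n = np.arange(512)
samples = np.cos(2 * np.pi * fc * n / fs)

# The 20 Hz tone folds down to |fc - fs| = 4 Hz in the sampled spectrum
alias = dominant_frequency(samples, fs)
```

This predictable fold is benign when the band placement is controlled; the harmful case the thesis targets is when unwanted spectrum (or noise) folds onto the signal band.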
258

ENTLL: A High-Performance and Lightweight Detector for AI-Generated Text

Jaallouk, Mohamad January 2024 (has links)
The rapid advancements in large language models (LLMs) have led to the generation of highly fluent and coherent text, posing significant challenges in distinguishing between human-written and machine-generated content. This thesis introduces Entll, a novel approach for detecting AI-generated text by combining negative log-likelihood (NLL), ranks, and logit entropy analysis. Entll uses a compact 1.8B-parameter model for feature extraction and aims to capture distinctive patterns that differentiate AI-generated text from human-written text. The proposed method is evaluated on diverse datasets spanning various domains and compared against state-of-the-art zero-shot detectors, namely Binoculars and Fast-DetectGPT. Experimental results demonstrate Entll's competitive accuracy and low false positive rates across different datasets and domains. Moreover, Entll exhibits significant computational efficiency, being 4.9 s faster than Fast-DetectGPT and 108.1 s faster than Binoculars. The results indicate that Entll is a promising solution for the reliable and rapid detection of AI-generated text, although challenges such as memorization and robustness against adversarial attacks remain areas for future research.
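Two of the per-token features the abstract mentions, negative log-likelihood and logit entropy, can be computed directly from a language model's logits. This is a generic sketch of those feature definitions, not Entll's actual implementation:

```python
import numpy as np

def nll_and_entropy(logits, token_ids):
    """Per-token negative log-likelihood and logit entropy from a
    (seq_len, vocab_size) logits array. A detector in this family can
    aggregate such per-token features into a document-level score."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerically stable
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    p = np.exp(logp)
    # NLL of each observed token under the model's next-token distribution
    nll = -logp[np.arange(len(token_ids)), token_ids]
    # Shannon entropy of each next-token distribution
    entropy = -(p * logp).sum(axis=-1)
    return nll, entropy
```

The intuition behind such detectors is that machine-generated text tends to sit in low-NLL, low-surprise regions of the model's distribution more consistently than human text does.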
259

CRM-optimering för Webbpoolen AB

Amedi, Roberto Piran January 2024 (has links)
This project focused on developing a CRM system for Webbpoolen AB entirely from the ground up, aiming to translate theoretical knowledge of system development into practical application within a business environment. By utilizing the MVC architecture from the start, the work resulted in a tailor-made and functional platform that effectively supports the management of customer information, user registration and authentication, and integrated communication mechanisms such as email and SMS. A central aspect of the project was to ensure the system's security and user-friendliness. The system was built on a LEMP stack, and through the use of the MVC architecture we were able to create a flexible system that is both robust and easy to maintain. This also allowed us to develop advanced features from scratch, including a custom-developed routing system that enhances the system's functionality and operational efficiency by enabling a high degree of customization to meet the specific needs of the company. Beyond the basic functions, the project also included the development of advanced features that contribute to a more comprehensive and in-depth management of customer interactions and business processes. The project not only provided a practical solution to the initial problem statement but was also an extensive learning process that deepened the understanding of the complexities and challenges of modern system development.
260

Optimizing Office Utilization : The Development of a Desk Booking System for Modern Hybrid Workplaces

Meander, Lina January 2024 (has links)
The COVID-19 pandemic has significantly transformed the modern workplace, particularly by accelerating the shift towards hybrid work models. In Switzerland, government mandates necessitated the reduction of physical office space, leading to new challenges as employees began returning to the office. This essay examines the development of a desk booking platform designed to optimize workspace utilization and enhance employee satisfaction in a hybrid work environment. The platform, built with ExpressJS, React, and a MySQL database, integrates Microsoft Entra for secure authentication, addressing logistical inefficiencies in desk management. By exploring the motivations, design considerations, and expected outcomes, this essay highlights the potential of technology to streamline office operations and adapt to the evolving dynamics of workplace attendance post-pandemic. The project's methodology, incorporating agile development practices, ensures responsiveness to changing requirements.
