  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

The effects of blurring vision on medio-lateral balance during stepping up or down to a new level in the elderly

Buckley, John, Elliott, David, Heasley, Karen, Scally, Andy J. 19 August 2009
Visual impairment is an important risk factor for falls, but relatively little is known about how it affects stair negotiation. The present study determined how the medio-lateral (ML) dynamics of stepping and single limb support stability when stepping up or down to a new level were affected by blurring the vision of healthy elderly subjects. Twelve elderly subjects (72.3 ± 4.2 years) were analysed performing single steps up and single steps down to a new level (7.2, 14.4 and 21.6 cm). Stepping dynamics were assessed by determining the ML ground reaction force (GRF) impulse, the lateral position of the centre of mass (CM) relative to the supporting foot (the average horizontal ML distance between the CM and the centre of pressure (CP) during single support) and movement time. Stability was determined as the rms fluctuation in the ML position of the CP during single support. Differences between optimal and blurred visual conditions were analysed using a random effects model. The duration of double and single support and the ML GRF impulse were significantly greater when vision was blurred, while the average CM-CP ML distance and ML stability were reduced. ML stability decreased with increasing step height and was lower when stepping down than when stepping up. These findings indicate that ML balance during stepping up and down was significantly affected by blurring vision. In particular, single limb support stability was considerably reduced, especially during stepping down. The findings highlight the importance of accurate visual feedback in the precise control of stepping dynamics when stepping up or down to a new level, and suggest that correcting common visual problems, such as uncorrected refractive errors and cataracts, may be an important intervention strategy for improving how the elderly negotiate stairs.
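The two ML measures defined in the abstract — the rms fluctuation of the centre of pressure and the GRF impulse — can be sketched on a synthetic trial. This is an illustration only, not the study's code; the 100 Hz sampling rate and the signal shapes are assumptions.

```python
import numpy as np

fs = 100.0                                # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)             # 1 s of single limb support

rng = np.random.default_rng(0)
# Synthetic ML centre-of-pressure (CP) trace: slow sway plus noise (m).
cp_ml = 0.01 * np.sin(2 * np.pi * 0.5 * t) + 0.002 * rng.standard_normal(t.size)
# Stability measure: rms fluctuation of CP about its mean position.
rms_cp = float(np.sqrt(np.mean((cp_ml - cp_ml.mean()) ** 2)))

# Synthetic ML ground reaction force (N); its time integral over single
# support is the ML GRF impulse (N*s), here via the trapezoidal rule.
grf_ml = 5.0 + 2.0 * np.sin(2 * np.pi * t)
impulse_ml = float(np.sum((grf_ml[1:] + grf_ml[:-1]) / 2) / fs)
```

A larger `impulse_ml` corresponds to the stronger sideways push reported under blurred vision, and a larger `rms_cp` to reduced single-support stability.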
82

Rate-reliability-complexity limits in ML and lattice-based decoding for MIMO, multiuser and cooperative communications

Singh, Arun Kumar 21 February 2012
In telecommunications, rate-reliability and encoding-decoding computational complexity (floating-point operations, flops) are widely considered to be limiting and interrelated bottlenecks. For this reason, any attempt to significantly reduce complexity may come at the expense of a substantial degradation in error performance. Establishing this intertwined relationship constitutes an important research topic of substantial practical interest. This dissertation deals with the question of establishing fundamental rate, reliability and complexity limits in general outage-limited multiple-input multiple-output (MIMO) communications, and its related point-to-point, multiuser, cooperative, two-directional and feedback-aided scenarios. We explore a large subset of the family of linear lattice encoding methods, and we consider the two main families of decoders: maximum likelihood (ML) based and lattice-based decoding. Algorithmic analysis focuses on efficient bounded-search implementations of these decoders, including a large family of sphere decoders. Specifically, the presented work provides a high signal-to-noise ratio (SNR) analysis of the minimum computational reserves (flops or chip size) that allow for a) a certain performance with respect to the diversity-multiplexing gain tradeoff (DMT) and b) a vanishing gap to the uninterrupted (optimal) ML decoder, or a vanishing gap to the exact implementation of (regularized) lattice decoding. The derived complexity exponent describes the asymptotic rate of increase of complexity, which is exponential in the number of codeword bits.
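The bounded-search sphere decoders the abstract analyses can be illustrated on a toy real-valued model y = Hx + n with BPSK-like symbols. The sketch below (not from the thesis) contrasts exhaustive ML search with a QR-based depth-first sphere decoder that prunes any branch whose partial distance already exceeds the best full-vector distance found so far; both return the exact ML solution, but the sphere decoder visits fewer nodes.

```python
import numpy as np
from itertools import product

def ml_brute_force(H, y, alphabet):
    """Exhaustive maximum-likelihood search over all symbol vectors."""
    best, best_d = None, np.inf
    for cand in product(alphabet, repeat=H.shape[1]):
        x = np.array(cand)
        d = float(np.sum((y - H @ x) ** 2))
        if d < best_d:
            best, best_d = x, d
    return best

def sphere_decode(H, y, alphabet):
    """Depth-first sphere decoder: exact ML, but prunes the search tree."""
    Q, R = np.linalg.qr(H)          # H square: ||y - Hx|| = ||Q^T y - R x||
    z = Q.T @ y
    n = H.shape[1]
    best = {"x": None, "d": np.inf}
    x = np.zeros(n)

    def search(level, dist):
        if dist >= best["d"]:       # outside the current sphere: prune
            return
        if level < 0:               # full vector enumerated: shrink radius
            best["x"], best["d"] = x.copy(), dist
            return
        for s in alphabet:
            x[level] = s
            r = z[level] - R[level, level:] @ x[level:]
            search(level - 1, dist + r * r)

    search(n - 1, 0.0)
    return best["x"]

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4))
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + 0.05 * rng.standard_normal(4)
x_ml = ml_brute_force(H, y, (-1.0, 1.0))
x_sd = sphere_decode(H, y, (-1.0, 1.0))
```

The thesis's complexity exponent concerns exactly how the pruned tree still grows exponentially in the number of codeword bits at high SNR.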
83

OCR scanning in Android with Google ML Kit: An application for compiling receipts

van Herbert, Niklas January 2022
For two parties with shared finances who want to review and tally their purchases from grocery stores, there are two options: save all physical receipts and handle the calculation manually, or use digital receipts, which far from all grocery stores offer. There is also no agreement between companies on where digital receipts should be stored, which means that a user may have to log in at several different locations. The user thus has the choice of manually managing physical receipts or manually managing digital receipts, or in the worst case a mixture of both. Regardless of the form of the receipt, every receipt must be reviewed to see which person made the purchase, which items, if any, should be removed, and what the total cost is. The aim of this project has therefore been to create an Android application that uses the Google ML Kit OCR library to allow two users to manage their receipts. The report examines the difficulties encountered in text recognition and presents the techniques and methods used during the creation of the application. The application was then evaluated by extracting text from several different receipts. Google's OCR library was also compared with Tesseract OCR to investigate whether a different OCR library could have improved the reliability of receipt scanning. The final result is an application that works well when a receipt is scanned correctly; however, there are significant difficulties in extracting text from receipts that deviate from the receipt templates used during implementation.
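The post-OCR step described above — turning recognised text lines into items and a total — can be sketched as plain string parsing. This is a hypothetical illustration, not the application's code or ML Kit output: the line formats, the sample receipt lines and the "Summa"/"Total" keywords are assumptions.

```python
import re

# A price is digits, a decimal comma or point, and two decimals at line end.
PRICE = re.compile(r"(\d+[.,]\d{2})\s*$")

def parse_receipt(lines):
    """Split OCR'd receipt lines into (item, price) pairs and a total."""
    items, total = [], None
    for line in lines:
        m = PRICE.search(line)
        if not m:
            continue                       # header, payment info, etc.
        price = float(m.group(1).replace(",", "."))
        name = line[: m.start()].strip()
        if name.lower().startswith(("total", "summa")):
            total = price                  # the "Summa"/"Total" row
        else:
            items.append((name, price))
    return items, total

items, total = parse_receipt(
    ["ICA Supermarket", "Mjölk 18,90", "Bröd 24.50", "Summa 43,40", "Kort"]
)
```

Receipts that deviate from this layout would fail such a parser, which mirrors the template sensitivity the abstract reports.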
84

MLOps paradigm - a game changer in Machine Learning Engineering?

Francois Regis, Dusengimana January 2023
In the last five-plus years, researchers and industry have been working hard to adopt MLOps (Machine Learning Operations) to maximize production. The current literature on MLOps is still mostly disconnected and sporadic (Testi et al., 2022). This study conducts mixed-method research, including a literature review, survey questionnaires and expert interviews, to address this gap. From these investigations, the researcher provides an aggregated overview of the necessary principles, components and roles, and the associated architecture and workflows. Furthermore, this research furnishes a definition of MLOps and addresses open challenges in the field. Finally, this work proposes an MLOps pipeline for implementing product recommendations on an e-commerce platform, to guide ML researchers and practitioners who want to automate and operate their ML products.
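The pipeline idea mentioned above — ordered stages with an evaluation gate before deployment — can be sketched minimally. The stage names, the toy popularity "model" and the 0.5 gate threshold are illustrative assumptions, not the thesis's pipeline.

```python
def ingest(ctx):
    # Toy interaction log: (user, product) pairs.
    ctx["interactions"] = [("u1", "p1"), ("u1", "p2"), ("u2", "p1")]

def train(ctx):
    # Toy "model": rank products by global popularity.
    counts = {}
    for _, product in ctx["interactions"]:
        counts[product] = counts.get(product, 0) + 1
    ctx["model"] = sorted(counts, key=counts.get, reverse=True)

def evaluate(ctx):
    # Gate metric: fraction of interactions covered by the top product.
    top = ctx["model"][0]
    hits = sum(1 for _, p in ctx["interactions"] if p == top)
    ctx["score"] = hits / len(ctx["interactions"])

def deploy(ctx):
    ctx["deployed"] = ctx["score"] >= 0.5   # deploy only if the gate passes

def run_pipeline(stages):
    ctx = {}
    for stage in stages:
        stage(ctx)            # each stage reads/writes a shared context
    return ctx

ctx = run_pipeline([ingest, train, evaluate, deploy])
```

In a real MLOps setup each stage would be a versioned, monitored component; the point here is only the automate-train-evaluate-gate-deploy ordering.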
85

Evaluating the Performance of Machine Learning on Weak IoT devices

Alhalbi, Ahmad January 2022
TinyML is a rapidly growing interdisciplinary field in machine learning. It focuses on enabling machine learning algorithms on embedded devices (microcontrollers) that operate at low power. The purpose of this study is to analyse how well TinyML can solve typical ML tasks. The study had four research questions, which were answered by examining the literature and by implementing a test model both on a laptop and on an embedded device (Arduino Nano 33).
The implementation began with creating a machine learning model of the sine function: a 3-layer, fully connected neural network that predicts the output of the sine function, so that the model acts as a regression model. The idea is to train a model that accepts values between 0 and 2π and outputs a value between -1 and 1. The model is then converted to TensorFlow Lite so that it can be deployed on the Arduino Nano 33. The results showed that TinyML is a good solution for typical ML tasks, since the ML algorithm was successfully transferred to the Arduino Nano 33 microcontroller. TinyML could handle and process data without the need for an internet connection, which allows developers to work in an efficient and suitable way. TinyML appears to have a bright future, and many scientific studies point out that the biggest footprint of machine learning in the future may be through TinyML.
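The sine-regression idea described above can be sketched in plain NumPy: a small fully connected network trained to map x in [0, 2π] to sin(x). In the thesis this is done in TensorFlow and converted to TensorFlow Lite for the Arduino Nano 33; the layer size, learning rate and iteration count below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, (256, 1))   # inputs in [0, 2*pi]
y = np.sin(x)                             # regression target in [-1, 1]

# One hidden tanh layer of 16 units, linear output.
W1 = rng.standard_normal((1, 16)) * 0.5; b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5; b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
mse_start = float(np.mean((pred0 - y) ** 2))

lr = 0.05
for _ in range(2000):                     # plain full-batch gradient descent
    h, pred = forward(x)
    g = 2 * (pred - y) / len(x)           # dL/dpred for mean squared error
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)          # backpropagate through tanh
    gW1 = x.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
mse_end = float(np.mean((pred - y) ** 2))
```

On a microcontroller the trained weights would be quantised and baked into the TensorFlow Lite flatbuffer; only inference runs on the device.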
86

Application of mathematical modelling to describe and predict treatment dynamics in patients with NPM1-mutated Acute Myeloid Leukaemia (AML)

Hoffmann, Helene 11 September 2023
Background: Acute myeloid leukaemia (AML) is a severe form of blood cancer that in many cases cannot be cured. Although chemotherapeutic treatment is effective in most cases, the disease often relapses. To monitor the course of the disease, and to identify a relapse early, the leukaemic cell burden in the bone marrow is measured. In the genome of these cells certain mutations can be found which lead to the occurrence of leukaemia. One of these is a mutation in the nucleophosmin 1 (NPM1) gene, found in about one third of all AML patients. The burden of leukaemic cells can be derived from the proportion of NPM1 transcripts carrying this mutation in a bone marrow sample. These values are measured routinely at specific time points during treatment and are then used to categorise patients into defined risk groups. In the studies from which the data for this work originates, the NPM1 burden was measured beyond the treatment period, giving a more comprehensive picture of the molecular course of disease.

Hypothesis: My hypothesis is that risk group categorisation can be improved by taking the dynamic time course information of the patients into account. A further hypothesis is that, with the help of statistical methods and computer models, the time course data can be used to describe the course of disease of AML patients and to assess whether they will experience a relapse.

Materials and Methods: For these investigations I was provided with a dataset of quantitative NPM1 time course measurements of 340 AML patients (with a median of 6 measurements per patient). To analyse this data I used statistical methods such as correlation, logistic regression and survival time analysis. For a better understanding of the course of disease I developed a mechanistic model describing the dynamics of the cell numbers in the bone marrow of an AML patient. This model can be fitted to the measurements of a patient by adjusting two parameters, which represent the individual severity of disease. To predict a possible relapse within 2 years after the beginning of treatment, I used data generated with the mechanistic model (synthetic data). For the prediction, three different methods were compared: the mechanistic model, a recurrent neural network (RNN) and a generalised linear model (GLM). Both the RNN and the GLM were trained and tuned on part of the synthetic data. Afterwards, all three methods were tested on the so far unseen part of the data set (test data).

Results: Analysing the data, I found that the decreasing slope of the NPM1 burden during primary treatment, as well as the absolute burden after treatment, harbour information about the further course of disease. Specifically, a faster decrease of the NPM1 burden and a lower final burden lead to a better prognosis. Further, I could show that the developed simple mechanistic model is able to describe the course of disease of most patients. When I divided the patients into two risk groups using the fitted model parameters, the groups showed distinct relapse-free survival times, and this categorisation distinguished the groups better than the current WHO categorisation. I also tried to predict a 2-year relapse using synthetic data and the three prediction methods. Which method I used had nearly no impact; much more important was the quality of the data. In particular, the sparseness of the data found in the time courses of AML patients has a considerable negative effect on the predictability of relapse. Using a synthetic data set with measurement times oriented on the times of chemotherapy, I could show that a sophisticated measurement scheme could improve relapse predictability.

Conclusions: In conclusion, I suggest including the dynamic molecular course of the NPM1 burden of AML patients in clinical routine, as it harbours additional information about the course of disease. Involving a mechanistic model to assess the risk of AML patients can help to make more accurate predictions about their general prognosis. An accurate prediction of the time of relapse is not possible. All three methods used (mechanistic model, statistical model and neural network) are in general suitable to predict relapse of AML patients; for reliable predictions, however, the quality of the data needs to be drastically improved.
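The flavour of a two-parameter mechanistic view of the NPM1 burden can be sketched as follows: a kill factor d applied at each chemotherapy cycle and an exponential regrowth rate g during follow-up. This is an illustration of the modelling idea only, not the thesis's model; the cycle count, 30-day spacing and parameter values are assumptions.

```python
import math

def simulate_burden(d, g, n_cycles=4, follow_up_days=720, dt=1.0):
    """Return a list of (day, burden); burden starts at 100 (per cent)."""
    burden, t, out = 100.0, 0.0, []
    for _ in range(n_cycles):
        burden *= d                   # instantaneous kill per chemo cycle
        out.append((t, burden))
        t += 30                       # assumed 30 days between cycles
    while t <= follow_up_days:        # exponential regrowth in follow-up
        out.append((t, burden))
        burden *= math.exp(g * dt)
        t += dt
    return out

# A strong treatment effect (small d) with slow regrowth (small g).
course = simulate_burden(d=0.05, g=0.01)
```

Fitting (d, g) per patient and thresholding them reproduces the kind of risk grouping described in the Results: a smaller d (faster decrease) and lower end-of-treatment burden delay the return of the burden toward relapse levels.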
87

Artificial intelligence in Swedish municipalities' accounting process

Lejon, Oskar, Hansson, Simon January 2024
Background: Accounting differs between profit-driven organizations and municipalities, primarily because municipalities exist to achieve different purposes. Municipalities in Sweden are administrative units responsible for public services, governed by locally elected politicians and funded through municipal taxes and government grants. The municipal accounting process includes transactions, bookkeeping, financial statements and reporting. Accounting uses technologies such as ML and RPA to streamline the accounting process; with these technologies, transactions and account postings can occur automatically, contributing to increased efficiency and quality.

Purpose: The aim of the study is to account for which AI-based technologies are used in the accounting processes of Swedish municipalities and in which stages of the accounting process AI technologies have been adopted. In addition, the considerations Swedish municipalities make when adopting AI technologies in the accounting process are discussed.

Theoretical framework: The theoretical framework is used to analyse the empirical data and answer the research questions. The chapter therefore explains theories and concepts essential for understanding the results of the study. This creates an understanding of and context for the topic and shows the connection between theory and practice, which will be evident in the analysis section. Concepts and theories covered include AI and AI in accounting systems, accounting and the accounting process, and Rogers' adoption theory (2003).

Method: The study employs a qualitative method, in the form of semi-structured interviews, to collect data from five different municipalities. The responses were collected from accounting managers, IT economists and financial managers, so the respondents understand the municipality's accounting process. This method allows an understanding of the various challenges and opinions that exist regarding AI in municipal accounting. The municipalities were chosen to be of different sizes, so that the study encompassed a broader group.

Conclusion: The study shows that municipalities have not actively chosen to adopt AI within the accounting process. However, there are elements of AI in their accounting processes, as the municipalities use ML and RPA processes that to some extent include AI in the automated parts. The stages mainly affected by AI technology are the transaction and bookkeeping steps. The main consideration in adoption has been that AI improves the efficiency and quality of the municipalities' accounting process, while the disadvantages primarily concern cost aspects and the municipalities' civic responsibility, linked to the new technologies being unproven.
88

The pianism of Paderewski

Pluta, Agnieszka January 2014
Many aspects of Ignaz Jan Paderewski's life and career have been the subject of previous research, but some important areas remain uninvestigated. Moreover, many biographies, especially those written in English, have hitherto rarely adopted a critical stance. My aim here is to examine those elements of Paderewski's performance style that have not hitherto been fully studied. Unique Polish sources include unpublished letters written to his father and to Helena Górska, his secretaries' letters written in 1935 and between 1938 and 1939, and of course his correspondence with his pupils, which sheds considerable new light on his views on, and success in, piano teaching. This dissertation discusses in detail his stylistic approach, attitude towards piano playing, preparation for performance and methods of interpretation. Unpublished letters between Paderewski and his pupils deal with such issues as choosing concert programmes, techniques of pedalling and advanced interpretational issues. To evaluate changes in Paderewski's playing style, I have analysed a representative selection of his recordings made over the course of his career. Although Paderewski's style did not change radically, some of the recorded pieces do demonstrate significant differences in interpretation, and his experiments in phrasing, dynamics, tempo and pedalling. I additionally compare recordings of the same pieces by Paderewski and his contemporaries, for instance Arthur Friedheim's recording of Liszt's Hungarian Rhapsody No. 2 in C sharp minor. Such an approach illuminates, for example, some differences in style between representatives of the 'Liszt School' (of which Friedheim was one of the most celebrated exponents) and that of Leschetizky (as represented by Paderewski). This documentation and evaluation of Paderewski's performance style has naturally influenced my own performances of his works.
The accompanying recital therefore includes one of Paderewski's most substantial piano pieces, the Sonata in E flat minor, contrasted with a Sonata by Paderewski's contemporary Sergei Rachmaninov, and completed by works of Chopin in Paderewski's repertoire and a piece by his pupil Ernest Schelling, also recorded by Paderewski. The recital thus constitutes a practical application of Paderewski's performance and programming styles as discussed in the dissertation.
89

Church music and Protestantism in post-Reformation England : discourses, sites & identities

Willis, Jonathan Peter January 2009
This thesis is an interdisciplinary examination of the role religious music played in the formation of Protestant religious identities during the Elizabethan phase of the English Reformation. It is allied with current post-revisionist trends in seeking to explain how the population of sixteenth-century England adjusted to the huge doctrinal upheaval of the Reformation. It also seeks to move post-revisionism onwards, by suggesting that the synthetic patchwork of beliefs which emerged during the English Reformation was nonetheless distinctively Protestant, and that we must redefine our notion of what it actually meant to be Protestant in the context of post-Reformation England. The first of three sections, ‘Discourses’, explores the classical and religious discourses which underpinned sixteenth-century understandings of music, and its use in religious worship. Chapter one investigates the strengthening and importance of neo-classical notions of speculative music during the Renaissance, while chapter two explores how these notions affected the way Protestant reformers thought about, wrote about, and used music in public worship. Section two, ‘Sites’, looks at the practice of Church music in the parish and the cathedral church. Chapter three uses qualitative and quantitative data from churchwardens’ accounts to document changing patterns of musical expenditure in the Elizabethan parish, while chapter four focuses on the cathedral, and challenges received notions about the supposed dichotomy between parish and cathedral worship practices. The third and final section, ‘Identities’, shifts its attention to the people of Elizabethan England, and the ways in which music both served and shaped the processes of religious identity formation. 
Chapter five looks at music as a tool of pedagogy, propaganda and devotional piety, in church, schoolroom and home, while chapter six concentrates on the ways in which Church music both reinforced and complicated notions of communal and individual identity, acting as a source of both harmony and discord.
90

Words, ideas and music : a study of Tchaikovsky's last completed work, the Six Songs, Opus 73

Rudeforth, Helen Elizabeth January 1999
This study focuses on P.I. Tchaikovsky's last completed work, the richly symbolic Six Songs, Opus 73. It demonstrates for the first time how Tchaikovsky's significant literary talents impacted on his song output in general, and on this cycle of songs in particular, providing us also with new insights into his personality. The composer selected and sequenced the poems used for the Opus 73 set to form the cycle of texts himself. The resulting songs are underpinned by a network of internal connections, which parallel the techniques used in the original poems in remarkable ways and link subtly with coded fate messages found elsewhere in the composer's output. The study presents evidence which enhances Pyotr Il'ich's reputation as a skilled manipulator of words, ideas and music.
