111

Investigating innovation : measurement, standardization and practical application

Boonzaaier, Gerhardus Petrus 29 April 2010 (has links)
Growing competition, globalisation and changing circumstances make innovation a prerequisite for the growth, success and survival of any private or public organisation. While innovation in technology, production, marketing and finance all remain essential, it is innovation in management that is most desperately in short supply. A literature study did not reveal any existing scale that measures all the factors and processes relevant to organisational innovation, so a scale for managerial innovation was developed, based on the work of various researchers in the field of innovation. The major tasks in the innovation process, together with the structural arrangements and social patterns that facilitate them, are discussed. Innovation consists of a set of processes carried out at the micro-level by individuals and groups of individuals, and these micro-processes are in turn stimulated, facilitated and enhanced - or the opposite - by a set of macro-structural conditions. A semantic differential scale was developed to measure managerial innovation. The scale consists of 88 items and was designed to reflect the major factors and processes of organisational innovation. Various statistical tests were used to evaluate the scale and the data obtained through it. Factor analysis of the data identified five factors: Factor 1 (leadership and culture), Factor 2 (employee acquisition and development), Factor 3 (variables that facilitate problem solving and aid innovation), Factor 4 (variables that impact negatively on innovation), and Factor 5 (variables external to the organisation that influence innovation). Cronbach's alpha showed a very high degree of reliability, and the scale conformed to the criteria of content validity. Analysis of variance (ANOVA) was used to perform comparative analyses on the biographical variables. The relationships between the five factors and age, gender, level of education, industry, length of service, and the combined effects of age and gender, age and length of service, gender and industry, and gender and length of service were analysed. Age appears to play a significant role in Factor 1 and Factor 2 (leadership and culture, and employee acquisition and development): for these two factors, average achievement in terms of innovation seems to increase with age. For Factors 3, 4 and 5, age does not appear to affect achievement significantly. The results indicate no significant relationship between gender and innovation. They indicate a positive relationship between level of education and innovation for Factors 3, 4 and 5, whereas average achievement differs significantly between education levels for Factor 1 (leadership and culture) and Factor 2 (employee acquisition and development), where scores seem to decline as the level of education increases. For all five factors there appears to be a highly significant difference in average achievement when individuals from different industries are compared, but no significant relationship between length of service and innovation. The ANOVA results for combined variables indicate a significant difference in average achievement on Factor 1 (leadership and culture) when the research participants are grouped by both age and gender.
In general, males of any age group scored equal to or higher than their female counterparts on Factor 1, and scores for Factor 1 generally increased with age. For Factors 2, 3, 4 and 5 there is no significant difference in achievement when participants are grouped according to age and gender. The tests for differences in achievement when participants are grouped according to age and length of service do not indicate any significant difference in average achievement between the groups. / Thesis (PhD)--University of Pretoria, 2009. / Human Resource Management / unrestricted
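The reliability statistic named above, Cronbach's alpha, is straightforward to compute from a respondents-by-items score matrix. A minimal sketch, not the author's analysis — the data below are synthetic, so the resulting alpha only demonstrates the computation:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items (88 in the study)
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents, 88 semantic-differential items on a 7-point scale.
rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(200, 88))
print(f"alpha = {cronbach_alpha(scores):.3f}")  # near 0 for uncorrelated random data
```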
112

Untersuchungen zur Risikominimierungstechnik Stealth Computing für verteilte datenverarbeitende Software-Anwendungen mit nutzerkontrollierbar zusicherbaren Eigenschaften / Investigations of the risk-minimisation technique stealth computing for distributed data-processing software applications with properties that can be assured under user control

Spillner, Josef 18 December 2015 (has links)
The security and reliability of applications that process sensitive data can be significantly increased, and controlled by the user, through their protected relocation into the cloud using a combination of target-metric-dependent data coding, continuous multiple service selection, service-dependent optimised data distribution and coding-dependent algorithms. The combination of these techniques into an application-integrated stealth protection layer is a necessary foundation for constructing secure applications with assurable security properties within a correspondingly adapted software development process. Contents: 1 Problem Statement 1.1 Introduction 1.2 Fundamental Considerations 1.3 Problem Definition 1.4 Classification and Delimitation 2 Approach and Problem-Solving Methodology 2.1 Assumptions and Contributions 2.2 Scientific Methods 2.3 Structure of the Thesis 3 Stealth Coding for Secured Data Use 3.1 Data Coding 3.2 Data Distribution 3.3 Semantic Linking of Distributed Coded Data 3.4 Processing of Distributed Coded Data 3.5 Summary of Contributions 4 Stealth Concepts for Reliable Services and Applications 4.1 Overview of Platform Concepts and Services 4.2 Network Multiplexer Interface 4.3 File Storage Interface 4.4 Database Interface 4.5 Stream Storage Service Interface 4.6 Event Processing Interface 4.7 Service Integration 4.8 Application Development 4.9 Platform-Equivalent Cloud Integration of Secure Services and Applications 4.10 Summary of Contributions 5 Scenarios and Application Fields 5.1 Online File Storage with Search Function 5.2 Personal Data Analysis 5.3 Value-Added Services for the Internet of Things 6 Validation 6.1 Experiment Infrastructure 6.2 Experimental Validation of Data Coding 6.3 Experimental Validation of Data Distribution 6.4 Experimental Validation of Data Processing 6.5 Functionality and Properties of the Storage Service Connection 6.6 Functionality and Properties of the Storage Service Integration 6.7 Functionality and Properties of Data Management 6.8 Functionality and Properties of Data Stream Processing 6.9 Integrated Scenario: Online File Storage 6.10 Integrated Scenario: Personal Data Analysis 6.11 Integrated Scenario: Mobile Applications for the Internet of Things 7 Summary 7.1 Summary of Contributions 7.2 Critical Discussion and Assessment 7.3 Outlook Indexes List of Tables List of Figures Listings Bibliography Symbols and Notations Software Contributions for Native Cloud Applications Repositories with Experiment Data
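The dispersal principle behind such a protection layer can be illustrated with a toy example. The sketch below is not the dissertation's stealth coding scheme; it shows only the simplest possible dispersal, XOR-based secret sharing, where each provider stores one share and no single provider can reconstruct the data:

```python
import os
from functools import reduce

def split_xor(data: bytes, n: int = 3) -> list[bytes]:
    """Split data into n XOR shares; all n shares are needed to reconstruct,
    and any n-1 of them reveal nothing (one-time-pad argument)."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, vals) for vals in zip(data, *shares))
    return shares + [last]

def join_xor(shares: list[bytes]) -> bytes:
    """Reconstruct the original bytes by XOR-ing all shares together."""
    return bytes(reduce(lambda a, b: a ^ b, vals) for vals in zip(*shares))

secret = b"sensitive record"
shares = split_xor(secret, n=3)  # e.g. store each share with a different cloud provider
assert join_xor(shares) == secret
```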
113

Towards reliable implementation of digital filters / Vers une implémentation fiable des filtres numériques

Volkova, Anastasia 25 September 2017 (has links)
In this thesis we develop approaches for improving the numerical behavior of digital filters, with a focus on the impact of the accuracy of the computations. This work is done in the context of a reliable hardware/software code generator for Linear Time-Invariant (LTI) digital filters, in particular those with Infinite Impulse Response (IIR). We consider problems related to the implementation of LTI filters in fixed-point arithmetic while taking into account the finite precision of the computations necessary for the transformation from filter to code. This point is important in the context of filters used in embedded critical systems such as autonomous vehicles. We provide a new methodology for error analysis when linear filter algorithms are investigated from a computer-arithmetic perspective. At the heart of this methodology lies the reliable evaluation of the Worst-Case Peak Gain measure of a filter, which is the l1 norm of its impulse response. The proposed error analysis is based on a combination of techniques such as rigorous floating-point error analysis, interval arithmetic and multiple-precision implementations. This thesis also investigates the trade-off between hardware cost (e.g. area) and the precision of computations for implementations on FPGAs, and we provide basic building-block algorithms for solving this problem automatically. Finally, we integrate our approaches into a unifying open-source code generator to enable automatic and reliable implementation of any LTI digital filter algorithm.
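The Worst-Case Peak Gain mentioned above has a simple definition: for impulse response h, WCPG = sum over k of |h[k]|. A plain floating-point sketch follows (hypothetical example filter; the estimate is truncated and unrigorous, whereas the thesis's contribution is precisely to bound the truncation and rounding errors reliably):

```python
import numpy as np
from scipy.signal import dimpulse

def wcpg_naive(b, a, n_terms: int = 10_000) -> float:
    """Truncated l1 norm of the impulse response of an IIR filter H = b/a.
    Floating-point and truncated: a rough estimate, NOT the rigorous
    enclosure computed in the thesis."""
    _, h = dimpulse((b, a, 1.0), n=n_terms)  # dt=1 for a discrete-time system
    return float(np.abs(h[0]).sum())

# Example: a first-order lowpass y[n] = 0.5*x[n] + 0.5*y[n-1];
# its impulse response is 0.5 * 0.5**k, so the exact WCPG is 1.0.
print(wcpg_naive(b=[0.5], a=[1.0, -0.5]))
```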
114

Validation of the Tri-Choice Naming and Response Bias Measure

Huston, Chloe Ann 19 May 2021 (has links)
No description available.
115

Development of a reliable and time-efficient digital production process of encrypted intelligent keys : Embedded systems and software development

Almario Strömblad, Fredrik, Svensson, Primus January 2022 (has links)
Smart keys are increasing in popularity due to the many benefits they bring; access control and overview have never been as efficient as they are today. This thesis project automates the digital production of a new line of keys. Automating this production process improves the scalability, reliability, and efficiency of production. This report includes background research on critical components, methodologies to solve the presented subproblems, the results of the project, and a discussion providing insight into the possible benefits of an automated production line. The core elements of this automation are an integrated circuit holding a microcontroller, hardware components, and a graphical user interface. The project results in an automated production process capable of producing smart keys more efficiently than before, together with a report on the most common errors encountered in this production process and suggestions for further improving scalability, reliability, and efficiency.
116

Dependable messaging in wireless sensor networks

Zhang, Hongwei 13 September 2006 (has links)
No description available.
117

Robust Wireless Communications with Applications to Reconfigurable Intelligent Surfaces

Buvarp, Anders Martin 12 January 2024 (has links)
The concepts of a digital twin and extended reality have recently emerged, which require a massive amount of sensor data to be transmitted with low latency and high reliability. For low-latency communications, joint source-channel coding (JSCC) is an attractive error-correction approach compared to the highly complex digital systems currently in use. I propose the use of complex-valued and quaternionic neural networks (QNN) to decode JSCC codes, where the complex-valued neural networks show a significant improvement over real-valued networks and the QNNs achieve exceptionally high performance. Furthermore, I propose mapping encoded JSCC code words to the baseband of the frequency domain in order to enable time/frequency synchronization as well as to mitigate fading using robust estimation theory. Additionally, I perform robust statistical signal processing on the high-dimensional JSCC code, showing significant noise immunity with drastic performance improvements at low signal-to-noise ratio (SNR) levels. The performance of the proposed JSCC codes is within 5 dB of the optimal performance theoretically achievable and outperforms the maximum-likelihood decoder at low SNR while exhibiting the smallest possible latency. I designed a Bayesian minimum mean square error estimator for decoding high-dimensional JSCC codes, achieving 99.96% accuracy. With the recent introduction of electromagnetic reconfigurable intelligent surfaces (RIS), a paradigm shift is currently taking place in the world of wireless communications. These new technologies have enabled the inclusion of the wireless channel as part of the optimization process. In order to decode polarization-space modulated RIS reflections, robust polarization-state decoders are proposed using the Weiszfeld algorithm and a generalized Huber M-estimator. Additionally, QNNs are trained and evaluated for the recovery of the polarization state. Furthermore, I propose a novel 64-ary signal constellation based on scaled and shifted Eisenstein integers and generated using media-based modulation with a RIS. The waveform is received using an antenna array and decoded with complex-valued convolutional neural networks. I employ the circular cross-correlation function and a-priori knowledge of the phase-angle distribution of the constellation to blindly resolve phase offsets between the transmitter and the receiver without the need for pilots or reference signals. Furthermore, the channel attenuation is determined using statistical methods, exploiting the fact that the constellation has a particular distribution of magnitudes. After resolving the phase and magnitude ambiguities, the noise power of the channel can also be estimated. Finally, I tune an Sq-estimator to robustly decode the Eisenstein waveform. / Doctor of Philosophy / This dissertation covers three novel wireless communications methods: analog coding, communications using the electromagnetic polarization, and communications with a novel signal constellation. The concepts of a digital twin and extended reality have recently emerged, which require a massive amount of sensor data to be transmitted with low latency and high reliability. Contemporary digital communication systems are highly complex, achieving high reliability at the expense of high latency. In order to reduce the complexity and hence the latency, I propose to use an analog coding scheme that directly maps the sensor data to the wireless channel.
Furthermore, I propose the use of neural networks for decoding at the receiver, hence the name neural receiver. I employ various data types in the neural receivers, thereby leveraging the mathematical structure of the data to achieve exceptionally high performance. Another key contribution here is the mapping of the analog codes to the frequency domain, enabling time and frequency synchronization. I also utilize robust estimation theory to significantly improve the performance and reliability of the coding scheme. With the recent introduction of electromagnetic reconfigurable intelligent surfaces (RIS), a paradigm shift is currently taking place in the world of wireless communications. These new technologies have enabled the inclusion of the wireless channel as part of the optimization process. Therefore, I propose to use the polarization state of the electromagnetic wave to convey information over the channel, where the polarization is determined using a RIS. As with the analog codes, I also extensively employ various methods of robust estimation to improve the recovery of the polarization at the receiver. Finally, I propose a novel communications signal constellation generated by a RIS that allows for equal probability of error at the receiver. Traditional communication systems utilize reference symbols for synchronization; in this work, I utilize statistical methods and the known distributions of the properties of the transmitted signal to synchronize without reference symbols. This is referred to as blind channel estimation. The reliability of the third communications method is enhanced using a state-of-the-art robust estimation method.
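The Weiszfeld algorithm named above is a classic fixed-point iteration for the geometric median, which is robust to outliers in a way the arithmetic mean is not. A generic minimal sketch, not the dissertation's decoder — the polarization-specific details are omitted and the 3-component test vector below is an illustrative assumption:

```python
import numpy as np

def weiszfeld(points: np.ndarray, iters: int = 100, eps: float = 1e-9) -> np.ndarray:
    """Geometric median of row vectors: argmin_y sum_i ||x_i - y||_2."""
    y = points.mean(axis=0)                    # start from the (non-robust) mean
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, eps)                 # avoid division by zero at a data point
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

# Noisy repeated measurements of a hypothetical 3-component polarization vector,
# with one gross outlier; the geometric median stays near the true value.
rng = np.random.default_rng(1)
true = np.array([1.0, 0.0, 0.7])
obs = true + 0.05 * rng.standard_normal((50, 3))
obs[0] = [10.0, -10.0, 10.0]                   # outlier
print(weiszfeld(obs))
```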
118

Ultra-reliable Low-latency, Energy-efficient and Computing-centric Software Data Plane for Network Softwarization

Xiang, Zuo 05 October 2022 (has links)
Network softwarization plays a vital role in the development and deployment of the latest communication systems for 5G and beyond. It enables a more flexible and intelligent network architecture that supports agile network management and the rapid launch of innovative network services, with substantial reductions in Capital Expense (CAPEX) and Operating Expense (OPEX). Despite these benefits, the 5G system also raises unprecedented challenges, as emerging machine-to-machine and human-to-machine communication use cases require Ultra-Reliable Low Latency Communication (URLLC). According to empirical measurements performed by the author of this dissertation on a practical testbed, state-of-the-art technologies and systems are not able to achieve the one-millisecond end-to-end latency required by the 5G standard on Commercial Off-The-Shelf (COTS) servers. This dissertation provides a comprehensive introduction to three innovative approaches that can be used to improve different aspects of the current software-driven network data plane. All three approaches are carefully designed, professionally implemented and rigorously evaluated. According to the measurement results, these novel approaches advance research on the design and implementation of an ultra-reliable, low-latency, energy-efficient and computing-centric software data plane for 5G communication systems and beyond.
119

Rigorous defect control and the numerical solution of ordinary differential equations

Ernsthausen, John 10 1900 (has links)
Modern numerical ordinary differential equation initial-value problem (ODE-IVP) solvers compute a piecewise polynomial approximate solution to the mathematical problem. Evaluating the mathematical problem at this approximate solution defines the defect. Corless and Corliss proposed rigorous defect control of numerical ODE-IVP solving. This thesis automates rigorous defect control for explicit, first-order, nonlinear ODE-IVP. Defect control is residual-based backward error analysis for ODEs, a special case of Wilkinson's backward error analysis. This thesis describes a complete software implementation of the Corless and Corliss algorithm and extensive numerical studies. Basic time-stepping software is adapted to defect control and implemented. Advances in software developed for validated computing applications, and in programming languages supporting operator overloading, enable the computation of a tight rigorous enclosure of the defect evaluated at the approximate solution with Taylor models. By rigorously bounding a norm of the defect, the Corless and Corliss algorithm controls, to mathematical certainty, the norm of the defect to be less than a user-specified tolerance over the integration interval. The validated computing software used in this thesis happens to compute a rigorous supremum norm. The defect of an approximate solution to the mathematical problem is associated with a new problem, the perturbed reference problem. This approximate solution is often the product of a numerical procedure. Nonetheless, it solves exactly the new problem, including all errors. Defect control accepts the approximate solution whenever the sup-norm of the defect is less than a user-specified tolerance. A user must be satisfied that the new problem is an acceptable model. / Thesis / Master of Science (MSc) / Many processes in our daily lives evolve in time, even the weather. Scientists want to predict the future makeup of such a process, and to do so they build models of physical reality. Scientists design algorithms to solve these models, and the algorithm implemented in this project was designed over 25 years ago. Recent advances in mathematics and software enabled this algorithm to be implemented. Scientific software implements mathematical algorithms, and sometimes there is more than one software solution to apply to a model. The software tools developed in this project enable scientists to objectively compare solution techniques. There are two forces at play: models and software solutions. This project built software to automate the construction of the exact solution of a nearby model. That's cool.
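The defect described above is concrete: if u(t) is the solver's piecewise-polynomial approximation to y' = f(t, y), the defect is delta(t) = u'(t) - f(t, u(t)). A plain floating-point sketch of sampling it follows (hypothetical right-hand side; u' is approximated by a central difference, whereas the thesis encloses delta rigorously with Taylor models rather than sampling):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return -y + np.sin(t)          # example right-hand side (an assumption)

# Dense output gives the piecewise-polynomial interpolant u(t).
sol = solve_ivp(f, (0.0, 10.0), [1.0], dense_output=True, rtol=1e-8)

def defect(t: float, h: float = 1e-6) -> float:
    """delta(t) = u'(t) - f(t, u(t)) for the dense-output interpolant u,
    with u'(t) approximated by a central difference."""
    u = sol.sol
    du = (u(t + h) - u(t - h)) / (2.0 * h)
    return float(du[0] - f(t, u(t))[0])

ts = np.linspace(0.1, 9.9, 200)
print("max |defect| ~", max(abs(defect(t)) for t in ts))
```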
120

Are YouTube videos on cutaneous squamous cell carcinoma a useful and reliable source for patients?

Reinhardt, Lydia, Steeb, Theresa, Harlaß, Matthias, Brütting, Julia, Meier, Friedegund, Berking, Carola 21 May 2024 (has links)
A variety of new treatment options for skin cancer patients drives the need for information and education, which is increasingly met by videos and websites [1, 2]. However, distinguishing between high- and low-quality content becomes more difficult as the number of videos increases. Recently, videos addressing patients with melanoma or basal cell carcinoma (BCC) were found to be of predominantly mediocre quality and poor reliability [3, 4]. Until now, no evaluation of videos on cutaneous squamous cell carcinoma (cSCC) has been performed. Furthermore, no patient guideline currently exists for this entity [5–7]. Therefore, we aimed to systematically identify and evaluate videos on cSCC, the second most common type of skin cancer worldwide after BCC [8]. Our results will contribute to shared decision-making and help physicians and patients to select high-quality videos.
