  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

A Dual Dielectric Approach for Performance Aware Reduction of Gate Leakage in Combinational Circuits

Mukherjee, Valmiki 05 1900 (has links)
Design of systems in the low-end nanometer domain has introduced new dimensions in power consumption and dissipation in CMOS devices. With continued and aggressive scaling, using low-thickness SiO2 for the transistor gates, gate leakage due to gate-oxide direct tunneling current has emerged as the major component of leakage in CMOS circuits. Providing a solution to gate-oxide leakage has therefore become one of the key concerns in achieving low-power, high-performance CMOS VLSI circuits. In this thesis, a new approach is proposed involving dual dielectrics of dual thicknesses (DKDT) for reducing both ON- and OFF-state gate leakage. It is claimed that the simultaneous use of SiON and SiO2, each with multiple thicknesses, is a better approach for gate leakage reduction than the conventional use of a single gate dielectric (SiO2), possibly with multiple thicknesses. An algorithm is developed for DKDT assignment that minimizes the overall leakage of a circuit without compromising performance. Extensive experiments on ISCAS'85 benchmarks using 45nm technology show that the proposed approach can reduce leakage by as much as 98% (89.5% on average) without degrading performance.
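The thesis's actual DKDT assignment algorithm is not reproduced in the abstract. As an illustrative sketch only, the core idea of such slack-driven assignment can be shown as a greedy pass: give each gate the lower-leakage (but slower) dielectric variant whenever its timing slack permits, keeping the fast variant on critical gates. All names, numbers, and the data layout below are hypothetical.

```python
# Hypothetical sketch of slack-driven dielectric assignment (not the
# thesis's algorithm). Each gate carries its fast-variant delay, the delay
# of the low-leakage variant, and the delay of the rest of its path.

def assign_dielectrics(gates, clock_period):
    """Assign a dielectric variant per gate, low-leakage where slack allows."""
    assignment = {}
    for g in gates:
        slack = clock_period - (g["path_delay"] + g["delay"])
        extra = g["slow_delay"] - g["delay"]
        # Use the low-leakage variant only if its added delay fits in slack.
        if extra <= slack:
            assignment[g["name"]] = "SiON_thick"   # low-leakage variant
        else:
            assignment[g["name"]] = "SiO2_thin"    # fast variant
    return assignment

gates = [
    {"name": "g1", "delay": 1.0, "slow_delay": 1.4, "path_delay": 8.7},
    {"name": "g2", "delay": 1.0, "slow_delay": 1.4, "path_delay": 6.0},
]
# g1 is critical and stays fast; g2 has slack and gets the low-leakage variant.
print(assign_dielectrics(gates, clock_period=10.0))
```

A real assignment pass would iterate with static timing analysis after each change; this sketch evaluates each gate once for clarity.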
72

Síntese automática do leiaute de redes de transistores / Automatic layout synthesis of transistor networks

Ziesemer Junior, Adriel Mota January 2014 (has links)
Standard-cell-based physical synthesis flows have been used in industry and academia for a long time. The technique is known to be reliable and predictable, since the same cell library, once validated and characterized, can be reused across different designs. However, a number of logic and electrical optimizations targeting problems such as static power reduction, asynchronous circuits, SEU, NBTI, DFM, etc. demand cells that do not exist in traditional libraries. The layout of these cells is usually drawn by hand, which can hinder the adoption and development of new techniques. This work presents ASTRAN, a tool for automatic layout synthesis of transistor networks. The tool places no restriction on the type of transistor network, including non-complementary logic, and supports the development of optimized circuits with smaller area and fewer transistors, connections, contacts, and vias. By using a new layout-compaction methodology based on mixed integer linear programming (MILP), cell geometries can be compacted efficiently in two dimensions simultaneously, while handling the conditional design rules found in technologies below 130nm. ASTRAN achieved productivity gains of one order of magnitude over purely manual design, needing only 12h to generate cells with up to 44 transistors. Compared with commercial standard cells - the worst case, since the real gain lies in generating cells that do not exist in those libraries, or in using the tool to obtain an initial layout before optimizing it by hand - the results were very close: 71% of the cells generated with ASTRAN had exactly the same area.
/ Cell-library-based synthesis flows for ASICs are among the most widely used methodologies in both industry and academia for VLSI design. They are known to be very reliable and predictable, since the same cell library can be characterised once and used in several different designs. However, there are a number of logic and electrical optimizations for problems like leakage reduction, asynchronous circuits, SEU, NBTI, DFM, etc. that demand the development of new cells. These cell layouts are usually designed by hand, which can limit the adoption and development of promising techniques. This work presents the development of ASTRAN, a tool for automatic layout synthesis of transistor networks. It can generate cell layouts with unrestricted cell structure, including non-complementary logic cells, supporting the development of optimized circuits with fewer transistors, connections, contacts, and vias. By using a new methodology for simultaneous two-dimensional (2D) layout compaction based on mixed integer linear programming (MILP), we were able to support most of the conditional design rules that apply to technology nodes below 130nm while producing dense cell layouts. We demonstrate that ASTRAN can generate layouts with a very small area overhead compared to commercial standard cells and can improve productivity by one order of magnitude compared to manual cell design. Gates containing up to 44 transistors were generated in less than 12h of run-time.
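The thesis formulates compaction as a simultaneous 2D MILP, which is beyond a short sketch. As a simpler, classical point of reference, one-dimensional layout compaction can be cast as a longest-path computation over a constraint graph, where an edge (a, b, d) encodes the spacing rule x[b] >= x[a] + d. The shapes and spacing values below are made up for illustration; the MILP approach in the text generalizes this to both axes at once.

```python
# Classic 1D constraint-graph compaction (illustrative; not the thesis's
# MILP formulation). Each constraint (a, b, d) means shape b must sit at
# least d units to the right of shape a. Bellman-Ford-style relaxation
# yields the leftmost legal positions.

def compact_1d(shapes, constraints):
    """Return leftmost legal x-coordinates satisfying all spacing rules."""
    x = {s: 0 for s in shapes}
    for _ in range(len(shapes)):          # relax until a fixed point
        changed = False
        for a, b, d in constraints:
            if x[a] + d > x[b]:
                x[b] = x[a] + d
                changed = True
        if not changed:
            break
    return x

# Three shapes with pairwise minimum-spacing rules.
print(compact_1d(["A", "B", "C"], [("A", "B", 3), ("B", "C", 2), ("A", "C", 6)]))
```

Note how the direct rule ("A", "C", 6) dominates the chained rules (3 + 2 = 5), so C lands at 6; handling both axes and conditional rules simultaneously is what motivates the MILP formulation.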
73

Assessing the relationship between resting autonomic nervous system functioning, social anxiety, and emotional autobiographical memory retrieval

Smith, Brianna January 2018 (has links)
Thesis advisor: Elizabeth Kensinger / Individuals with social anxiety disorder (SAD) tend to show emotional memory biases in the encoding and retrieval of social memories. Research has shown reduced heart rate variability (HRV) in clinical populations suffering from anxiety, including social anxiety. Heightened sympathetic activation, as measured by electrodermal activity (EDA), has also been associated with anxiety disorders. The aim of the present study was to examine the relation between HRV, social anxiety, and the re-experiencing of emotional autobiographical memories. Forty-four healthy young adults were recruited from the Boston College campus through SONA. Participants were given an online survey that instructed them to retrieve 40 specific events from the past in response to 40 socially relevant cues. For each event, participants provided a brief narrative, made several ratings (on a scale from 1-7), and indicated the specific emotions they experienced both at the time of retrieval and at the time of the event. Approximately one month after completing the memory survey, participants engaged in a 2-hour memory retrieval session while undergoing psychophysiological monitoring (heart rate, skin conductance, and respiration). Following the retrieval task, participants completed self-report questionnaires of social anxiety symptom severity and trait emotion regulation strategy (i.e., tendency to reappraise or suppress emotions). The present study found that positive memories received higher re-experiencing ratings than negative memories. Contrary to the original study hypothesis, however, there was no significant interaction between average re-experiencing (or arousal) ratings of positive or negative social autobiographical memories and SAD likelihood. A nonlinear, cubic relationship was found between one of three metrics of HRV and social anxiety symptom severity.
A significant effect was found between skin conductance and SAD likelihood, likely driven by a nearly significant difference in skin conductance between the SAD unlikely and SAD very probable groups; these findings provide further insight into the relationship between autonomic nervous system (ANS) functioning and social anxiety. Further, the present results suggest the intriguing possibility of a nonlinear relationship between HRV and severity of social anxiety. Future research with a larger sample size is needed to corroborate these findings. / Thesis (BS) — Boston College, 2018. / Submitted to: Boston College. College of Arts and Sciences. / Discipline: Departmental Honors. / Discipline: Psychology.
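The abstract refers to "three metrics of HRV" without naming them. SDNN and RMSSD below are two standard time-domain HRV metrics, shown purely to illustrate how HRV is derived from a series of inter-beat (RR) intervals; the RR values are a hypothetical example, not study data.

```python
# Two standard time-domain HRV metrics computed from RR intervals in ms.
# Illustrative only: the thesis's actual three HRV metrics are unnamed.
from math import sqrt

def sdnn(rr):
    """Standard deviation of RR intervals (overall variability)."""
    m = sum(rr) / len(rr)
    return sqrt(sum((x - m) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive differences (vagal tone proxy)."""
    diffs = [(b - a) ** 2 for a, b in zip(rr, rr[1:])]
    return sqrt(sum(diffs) / len(diffs))

rr = [812, 790, 835, 801, 820, 795]   # hypothetical RR series, ms
print(round(sdnn(rr), 1), round(rmssd(rr), 1))   # 15.4 30.5
```

Lower values of such metrics indicate reduced variability, which is the direction of effect the anxiety literature cited in the abstract reports.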
74

Printed Circuit Board Design and Layout for Hobbyists, Engineers, and Students

Derrenbacher, Michael A 01 December 2021 (has links) (PDF)
Printed Circuit Boards (PCBs) are a ubiquitous element of virtually every electronic system manufactured worldwide. It is no stretch of the imagination to say that if it’s electronic, there is a PCB in it. PCBs are necessary tools for electronics work, and tools need instructions. For better or worse, PCB knowledge is a deep and wide ocean. There is much to cover for even a surface-level understanding, and there are deep areas rich in technical expertise. Navigating this ocean is treacherous: common knowledge of yore can be downright dubious now. PCB manufacturing and electronics as a whole have seen incredible developments in the past few decades, and knowledge once true may be outdated. At the same time there is a downpour of new techniques to use and challenges to face. The storm of information deepens the sea and can make it seem impossible to get anywhere without becoming utterly lost. There are islands of knowledge out there, hiding in books, papers, and websites, but no guide to reach them. This thesis aims to guide the reader through the sea of information, providing a map that charts a course from the shallows of beginner knowledge into the depths of advanced design, showing how and where to learn more. It serves as an aid for navigating the exciting and vast world of PCB design and layout.
75

Architectural Synthesis Techniques for Design of Correct and Secure ICs

Sundaresan, Vijay January 2008 (has links)
No description available.
76

The EU as a Security Actor - A Comparative Study of the EU & NATO between 2006 and 2014

Marshall, Alexander January 2017 (has links)
NATO has provided security for the Western Hemisphere for more than half a century, and there is little doubt that it is one of the most successful security alliances the world has ever known. After the end of the Cold War, however, its future became increasingly uncertain, leaving space for another security actor: the EU. During the last two decades, the EU became more active in security matters and even launched its own first-ever anti-piracy and peacekeeping operations, despite a strong NATO presence in the same areas at the same time. We take a step back from these specific cases and approach the question: to what extent, if any, has the EU developed into a security actor similar to NATO? This question is approached through a deductive, mixed-methods study of longitudinal design, comparing the security regimes of the EU and NATO and the military expenditures of the two organisations. The results show that the EU has indeed developed into a security actor, but one that aligns more closely with the neoliberal institutionalist notion of a security institution.
77

電子設計自動化技術對台灣半導體產業價值網的影響 / The Impact of EDA Technology to Taiwan Semiconductor Industry Value Net

林毓柔 Unknown Date (has links)
Taiwan's semiconductor industry has flourished on the strength of its industry cluster effect: by 2005 its total output value exceeded NT$1.1 trillion, and it sustains a workforce of one hundred thousand in the Science Parks. With this deep industrial base and a complete top-to-bottom supply chain, Taiwan's semiconductor industry occupies a pivotal position in the global semiconductor industry and enjoys a clear developmental advantage. EDA (Electronic Design Automation) technology can be regarded as the wellspring of the IC industry, yet only a few global EDA vendors have placed R&D resources in Taiwan; the EDA tools on which domestic chip design depends are almost entirely controlled by foreign vendors, a situation unfavorable to the overall development of Taiwan's semiconductor industry. This study applies value chain theory to analyze the interactions and key value-innovation activities among semiconductor industry players, and develops a dynamic value net model from value net theory. By analyzing the value each firm provides within the dynamic value net, it examines the interaction and co-opetition between the EDA and semiconductor industries and the impact of EDA technology innovation on the semiconductor industry value net; it also finds that foundries are actively playing the role of value integrator in that net. The contribution of this research is to show, through analysis of the EDA industry and its technology, that EDA technology significantly affects the semiconductor industry value net: it has a significant positive influence on IC design companies' value innovation in R&D capability, cost control, market-entry timing, cooperative network relationships, and intellectual property protection, and a highly significant positive influence on foundries' value innovation in R&D capability, market value creation, cost control, market-entry timing, cooperative network relationships, and customer service. Because this study derives a dynamic value net model, subsequent researchers can use it to analyze dynamic changes in an industry value net. / The prosperity of the Taiwan semiconductor industry has been facilitated by the industry cluster effect. In 2005, the total value of the Taiwan semiconductor industry exceeded NT$1.1 trillion, and the IC industry created one hundred thousand jobs in the Science Parks. Built on a structure that emphasizes horizontal division and vertical integration, the IC industry has delivered an economic miracle to Taiwan. Because the Taiwan semiconductor industry has a well-organized infrastructure and a complete supply chain, it plays a very important role in the worldwide semiconductor industry. We may say that EDA (Electronic Design Automation; hereafter referred to as EDA) technology is the beginning of the IC industry. But in the EDA industry, only a few global EDA companies have deployed R&D resources in Taiwan. The EDA tools on which Taiwan semiconductor companies rely for IC design are almost completely in the hands of foreign EDA companies. This situation is very disadvantageous to the Taiwan IC industry. Therefore, the Taiwan government proclaimed that developing EDA talent and products would be a first priority in the "National SoC (System on Chip) Program".
This program aims to integrate EDA software and to provide an outstanding design environment for global system-design firms. This research focuses on three major groups of questions: 1. How do semiconductor companies interact within the Taiwan IC industry value chain, and what are the important value-creation activities among them? 2. What is the roadmap of EDA technology, and how is the EDA industry developing? 3. How does EDA technology influence the semiconductor industry value net, what are the interactions between the EDA industry and the Taiwan semiconductor industry, and what is the impact of EDA technology on value creation in the Taiwan semiconductor industry's dynamic value net? First, this research uses value chain theory to analyze the interaction and value-creation activities among Taiwan semiconductor companies. Second, it develops a "Dynamic Value Net Model" from value net theory and applies it to the Taiwan semiconductor industry. Third, it analyzes the relationships among the players in the Taiwan IC industry's dynamic value net, the interaction and co-opetition between EDA vendors and semiconductor companies, and the influence of EDA technology innovation on the Taiwan IC industry value net. There are four major findings: 1. EDA plays an important role in the IC industry. EDA technology plays a very important role in the IC industry, as shown in Figure A-1. EDA is a necessary technology for the IC design and PCB industries, and the EDA software industry occupies the most upstream position in the IC design and IC manufacturing value chains. Through EDA technology, IC design cycle time can be reduced and manufacturing yield raised, enhancing the industry's competitive advantage. 2. Co-opetition relationships in the Taiwan IC industry value net. This research identifies the complicated co-opetition relationships - "customer-supplier", "complementor", and "competitor" relations - among the players in the Taiwan IC industry value net. 3. Dynamic value net analysis of the Taiwan IC industry. Through dynamic value net analysis, this research examines the interactions among EDA vendors, IC design companies, and foundries, and finds that foundries are aggressively acting as value integrators in the Taiwan IC industry value net. Four major value-creation activities appear in the value net: (1) e-Service; (2) providing an "IC design reference flow", including DFM (Design for Manufacturing) support; (3) building EDA alliances to provide design support; (4) CyberShuttle. 4. Impact of EDA technology on the Taiwan IC industry value net. By analyzing the EDA industry and its technology, this research shows that EDA technology has a positive influence on the semiconductor industry value net. For IC design companies, EDA technology positively influences R&D capability, cost control, time-to-market, cooperative network relationships, and intellectual property protection. For foundries, it positively influences R&D capability, market value creation, cost control, time-to-market, cooperative network relationships, and customer service value. Because this research induces a dynamic value net model, subsequent researchers may use it to analyze dynamic change in any applicable industry value net. This research suggests that the Taiwan IC industry should establish an outstanding design environment and services, especially in EDA software, for global system-design firms. These measures would enable Taiwan to maintain its semiconductor manufacturing lead and grow the crucial design and design-service business.
78

Psychophysiologische Untersuchung mentaler Beanspruchung in simulierten Mensch-Maschine-Interaktionen / Psychophysiological investigation of mental workload in simulated human-machine interactions

Ribback, Sven January 2003 (has links)
This study addresses a problem from work psychology that arises in human-machine systems. In human-machine systems, information is exchanged in coded form. This condensed transmission has the advantage of requiring no lengthy description of the system state, so the operator can react to changed states quickly and efficiently - but only if the operator can map the codes to previously learned meanings. Depending on the type of coded information (visual, acoustic, or alphanumeric signals), design recommendations for code alphabets have been developed. For operators, the mental workload of decoding arises chiefly from the size of the code alphabet (the number of code symbols), the perceptual design of the codes, and the rules mapping meanings to code symbols.

In work psychology, the quality of code alphabets is usually judged by performance indicators: the time needed to decode the codes and the assignment errors that occur. Psychophysiological data are rarely used. It is questionable, however, whether times and errors alone are reliable indicators of the cognitive cost of decoding, since in the well-practiced state, with equal alphabet sizes but different code designs, mean decoding times often do not differ significantly between alphabets and errors no longer occur at all. The need postulated here for recording biosignals rests on the assumption that they yield additional information about mental workload during decoding that performance data do not capture: precisely when the performance data of two code alphabets do not differ, psychophysiological data can capture aspects of mental workload that performance data cannot. It is therefore proposed, extending the established approach, to record biosignals as a third data domain alongside performance data and subjective ratings of mental workload. This assumption was tested by recording biosignals.

The concept of mental workload is only vaguely defined and differentiated in the literature, so an extended model of mental workload, informed by the scientific literature, is presented. Mental workload is distinguished from emotional strain and further differentiated into psychomotor, perceptual, and cognitive workload, each elicited by the corresponding cost of the task at hand.

Two central questions were examined. First, the applied question of how far psychophysiological indicators of mental workload provide information about the quality of code alphabets beyond the performance data (decoding times and error counts). Second, the research question of how far psychophysiological indicators can differentiate the perceptual and cognitive aspects of mental workload involved in decoding. Emotional strain was not an object of the analyses and was largely avoided in the operationalization; psychomotor workload, the third aspect of mental workload, was held largely constant for both experimental groups.

In learning experiments, two samples homogenized by a learning and memory test each had to acquire the meanings of 54 codes of a code alphabet. Each of the two independent samples received a different alphabet; the alphabets differed in the number of letters (code length) and in the mapping rules to be applied, and thus in the perceptual and cognitive aspects of mental workload. The abbreviations corresponded to those used in a fire-brigade control center (short descriptions of emergency situations). In the learning phase, the code alphabets were first presented in blocks together with their meanings. The codes were then presented individually, in randomized order and without their meanings, in six consecutive test phases, with participants instructed to speak the meaning of each code into a microphone. Throughout the experiment, alongside performance data (decoding times and error counts) and subjective ratings of mental workload, the following central-nervous and peripheral-physiological biosignals were recorded: blood pressure, heart rate, phasic and tonic electrodermal activity, and the electroencephalogram. From these, 13 peripheral-physiological and 7 central-nervous parameters were computed, of which 7 peripheral and 3 central-nervous parameters met the statistical inclusion criteria well enough to enter the inferential analysis.

Performance data and subjective workload ratings were related to the psychophysiological parameters. The findings show that the psychophysiological data yield additional insight into cognitive cost.

As a further analysis, the codes were divided post hoc into two new code alphabets in order to increase the differences between them and obtain clearer stimulus-locked psychophysiological differences in the EEG data: those meaning-parallel codes in the two alphabets that differed maximally in decoding time were selected. A renewed analysis of the EEG data, however, brought no improvement.

Three main results emerged for the psychophysiological parameters. The first matters for psychophysiological methodology: many parameters distinguished between test phases, indicating sufficient sensitivity for studying mental workload in decoding. These include the number of spontaneous skin-conductance responses, the amplitude of skin-conductance responses, the skin-conductance level, heart rate, heart-rate difference, and the beta-2 band of the EEG, all of which follow a course similar to the performance data. This shows that the kind of mental workload operationalized here as decoding processes can be analyzed psychophysiologically.

A second result concerns mapping workload differences between the two groups: the skin-conductance level and the theta band of the spontaneous EEG differed between the samples from the first test phase on, indicating different cognitive cost in the two samples across all test phases.

The most important result concerns the gain in information from psychophysiological methods when evaluating code alphabets: the amplitude of the electrodermal activity and the heart-rate difference provided a genuine gain over the performance data, since in the later test phases, when the performance data of the two alphabets no longer differed, these parameters still differed between alphabets, capturing aspects of mental workload no longer reflected in performance. All three results indicate that, despite considerable technical and methodological effort, it appears worthwhile to draw on psychophysiological data when characterizing mental workload and designing code alphabets, since additional information about perceptual and cognitive decoding cost can be gained. / In this study a problem from work psychology is addressed, one that appears in human-machine systems. In human-machine systems information is exchanged as codes.
Using this kind of shortened information transmission needs no long description of the system state, so that the operator can react to the changed system state in a quick and efficient way. This is possible only in this case, if the operator has learned the meaning of the codes before. For the different kinds of coded informations (visual, acoustic or alphanumeric signals) special recommendations for their design were developed.<br /> Mental workload caused by decoding processes resulting from the size of the code alphabet, the percepted design of the codes, and the rules about the allocation of code meanings.<br /> <br /> The decision about the validity of code alphabets in work psychology is normally made by indicators of performance, which are the decoding times and decoding mistakes. Nearly all studies do not refer to psychophysiological data. <br /> It is questioned, if times and mistakes alone are valid indicators for the cognitive cost, because in well learned state and for the same size of the code alphabet but different design of the codes, the decoding times between code alphabets are not significantly different, and mistakes do not appear.<br /> This study postulates a necessity for the registration of psychophysiological data, so that additionally informations, which are not included in the performance data, can be examined. If the performance data does not differ between two code alphabets, psychophysiological data measures different aspects of mental workload, which could not be detected by performance data. To enlarge the established approach, it is recommended to registrate biosignals as a third domain of data to get additional informations about decoding processes.<br /> These hypotheses should be verified by registration of biosignals.<br /> <br /> There are vague definitions and deficient differentiations of the concept of mental workload in the scientific publications. To examine mental workload an enlarged model of mental workload is presented. 
Mental workload is delimited from emotional strain. Furthermore mental workload is differentiated in psychomotoric, perceptive, and cognitive aspects. These aspects of mental workload are caused by the psychomotoric, perceptive, and cognitive cost, which are initiated by the assigned task.<br /> <br /> Two main questions were examined in this study. <br /> First question refers to applied research. Do psychophysiological indicators of mental workload provide more information about the validity of code alphabets than performance data?<br /> The second question refers to what extent psychophysiological indicators of mental workload necessary for the decoding process could differentiate the perceptive and cognitive aspects of mental workload. <br /> <br /> >The emotional strain was not the objective of this study, therefore it was excluded from the experimental design.<br /> Psychomotoric workload as the third aspect of mental workload was a constant value for both experimental samples.<br /> <br /> In two learning experiments two samples with identical habituational memory performance were instructed to learn the meaning of 54 codes of a code alphabet. Both samples was presented another code alphabet, which differed in the number of included letters and the allocation rules. Thus the two code alphabets differed in the perceptive and in the cognitive aspect of mental workload.<br /> The combination of abbreviations was comparable to those used in a fire station. In a learning phase the code alphabets were presented with their meanings. Afterwards the codes were presented without their meanings in six following tests phases. Subjects were instructed to answer in a microphone. <br /> During the whole experiment performance data, subjective data of perceived strain, and psychophysiological data were registrated. The psychophysiological data contained: blood pressure, heart rate, phasic and tonic electrodermal activity, and the EEG. 
Thirteen peripherphysiological and seven EEG parameters were extracted from these raw data. Seven peripherphysiological and three EEG parameters accomplished the statistical premises and were included to further statistical analysis.<br /> Performance data and subjective data were set in relation to the psychophysiological parameters. The outcomes showed that using psychophysiological data generate additional informations about the cognitive cost.<br /> <br /> For further analysis the code items were divided into two new code alphabets. The intention of this analysis was to maximize the difference between the two code alphabets to get more stimuli based psychophysiological differences in the EEG data. This analysis included those pairs of codes with identical meaning and maximum difference in their decoding time. This further analysis did not improve the outcomes. <br /> <br /> Three main outcomes in respect to the psychophysiological data were detected. <br /> The first one is an important outcome for psychophysiological methodology. Many psychophysiological parameters differ between the test phases and thus show a sufficient sensitivity to examine mental workload in decoding processes. The number of spontaneous electrodermal responses, the amplitude of electrodermal responses, the electrodermal level, the heart rate, the heart rate difference, and the beta-2 frequency band of the EEG belong to these parameters. These parameters show a similar distribution like performance data. This shows the possibility of the operationalized mental workload through decoding processes analysable with psychophysiological methods.<br /> A second outcome concerns the possibility to show differences in mental workload between both samples in psychophysiological parameters:<br /> The electrodermal level and the theta frequency band of the EEG showed differences between both samples beginning from the first test phase. 
These parameters indicate different cognitive costs in the two samples across all test phases.<br /> <br /> The most important outcome concerns the information gained by using psychophysiological methods to test the validity of code alphabets. The amplitude of the electrodermal responses and the heart rate difference provide a surplus of information compared to performance data: in later test phases, in which the performance data no longer differed, the psychophysiological parameters still showed different characteristics for the two code alphabets. Different aspects of mental workload could therefore be quantified.<br /> <br /> All three outcomes show that, despite the considerable technical and methodological expenditure, it is reasonable to use psychophysiological data in the design of code alphabets, because they supply additional information about the perceptual and cognitive costs of the decoding processes.
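The between-sample comparisons described above can be illustrated with a small sketch. The thesis does not specify its statistical tests, so this is only a hypothetical example: comparing one parameter, the electrodermal level, between the two code-alphabet samples with Welch's t statistic. All measurement values below are invented.

```python
# Illustrative sketch (not from the thesis): Welch's t statistic for
# comparing a psychophysiological parameter between two independent samples.
from math import sqrt
from statistics import mean

def welch_t(a, b):
    """Welch's t statistic for two independent samples of unequal variance."""
    ma, mb = mean(a), mean(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)  # sample variance of b
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# Hypothetical electrodermal levels (microsiemens) for one test phase
sample_a = [4.1, 4.5, 3.9, 4.8, 4.2]  # sample with the simpler code alphabet
sample_b = [5.6, 5.9, 5.2, 6.1, 5.4]  # sample with the more complex alphabet
t = welch_t(sample_a, sample_b)
print(round(t, 2))  # → -5.9
```

A strongly negative t here would point to a consistently higher electrodermal level, and hence higher cognitive cost, in the second sample; in practice one such test per parameter and test phase would mirror the phase-by-phase comparisons reported above.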
79

Using data mining to increase controllability and observability in functional verification

Farkash, Monica C. 10 February 2015 (has links)
Hardware verification currently takes more than 50% of the whole development time. There is a sustained effort to improve the efficiency of the verification process, which in the past delivered a large variety of supporting tools. Recent years, however, have not seen any major technology change that would bring the improvements the process really needs (H. Foster 2013) (Wilson Research Group 2012). The existing approach to verification no longer provides that kind of qualitative jump. This work introduces a new tactic, providing a modern alternative to the existing approach to the verification problem. The novel approach I use in this research has the potential to improve the process significantly, well beyond incremental changes. It starts by acknowledging the huge amounts of data that accompany the hardware development process from inception to final product, and by treating that data not as a quantitative by-product but as a qualitative supply of information on which a smarter verification can be built. The approach is based on data already generated throughout the process, currently used by verification engineers to zoom into the details of particular verification aspects. By using existing machine learning approaches, we can zoom out and use the same data to extract information and gain knowledge with which to guide the verification process. This approach accepts the apparent loss of accuracy introduced by data discovery in order to achieve the overall goal. The latest advancements in machine learning and data mining offer the basis for a new understanding and usage of the data passed through the process. This work takes several practical problems for which the classical verification process has reached a roadblock and shows how the new approach can provide a jump in the productivity and efficiency of the verification process. 
It focuses on four different aspects of verification to prove the power of this new approach: reducing effort redundancy, guiding verification to areas that need it first, decreasing time to diagnose, and designing tests for coverage efficiency.
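The first of these aspects, reducing effort redundancy, can be sketched from the data the process already produces. The thesis does not disclose its actual algorithm, so the following is only an illustrative example: treating each test's coverage signature as a set and flagging a test as redundant when the events it hits are covered by another test.

```python
# Hypothetical sketch (not the author's tool): flag tests whose coverage
# is contained in another test's coverage, using per-test coverage data
# that a regression run already generates.
def redundant_tests(coverage):
    """coverage: dict mapping test name -> set of coverage events it hits."""
    redundant = set()
    tests = list(coverage)
    for t in tests:
        for other in tests:
            if other == t or other in redundant:
                continue  # never compare against an already-discarded test
            strictly_smaller = coverage[t] < coverage[other]
            # On an exact tie, keep only the lexicographically smaller name.
            if coverage[t] <= coverage[other] and (strictly_smaller or t > other):
                redundant.add(t)
                break
    return redundant

suite = {
    "t1": {"cov_a", "cov_b"},
    "t2": {"cov_a", "cov_b", "cov_c"},
    "t3": {"cov_d"},
}
print(sorted(redundant_tests(suite)))  # → ['t1']
```

Even this crude subset check gives a flavour of the "zoom out" idea: the same coverage data engineers use to debug single tests can, in aggregate, tell the team which effort is being duplicated.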
80

Messagingová infrastruktura a produktová analýza trhu / Message infrastructure and market analysis

Klimeš, Ivo January 2008 (has links)
This diploma thesis considers the modern messaging architectural concepts SOA and EDA. It presents the elementary principles of how these paradigms work and places them in the wider context of business processes and IT Governance. The aim of the thesis is to compare two preselected software solutions for operational monitoring, one per architectural style, against predefined comparison criteria. The thesis is divided into five consecutive parts. The first part places the modern architectures in their historical context, which is the starting point for the modern architectural styles. The following part focuses closely on the concepts of SOA and EDA and on a comparison of the two architectural styles. The next chapters connect these concepts with business processes and maturity models, both of which influence successful implementation and governance. These chapters flow into the last theoretical part of the thesis, IT Governance, which describes the elements needed for the successful operation of IT systems based on the SOA or EDA paradigm. The practical part builds on all of the previous chapters: two software solutions are selected and described, and then compared against the predefined criteria. The conclusion summarizes the knowledge acquired during the comparison of the paradigms and during the comparison of the selected monitoring products.
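A comparison against predefined weighted criteria, as described above, reduces to a simple weighted sum. The thesis does not publish its criteria or weights, so everything in this sketch (criterion names, weights, and scores) is invented for illustration.

```python
# Hypothetical sketch of weighted-criteria product scoring; all names,
# weights, and scores below are invented, not taken from the thesis.
def weighted_score(scores, weights):
    """Sum of per-criterion scores multiplied by their weights."""
    return sum(scores[c] * w for c, w in weights.items())

weights = {"process_support": 0.4, "governance_fit": 0.3, "cost": 0.3}
soa_product = {"process_support": 8, "governance_fit": 7, "cost": 5}
eda_product = {"process_support": 6, "governance_fit": 8, "cost": 7}

print(round(weighted_score(soa_product, weights), 2))  # 0.4*8 + 0.3*7 + 0.3*5 = 6.8
print(round(weighted_score(eda_product, weights), 2))  # 0.4*6 + 0.3*8 + 0.3*7 = 6.9
```

The design choice worth noting is that the weights encode the evaluator's priorities up front, so the two monitoring products can be ranked transparently and the ranking re-derived if the priorities change.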
