About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Análise episódica sobre um projeto de atualização de ERP em um operador logístico: levantamento das principais lições aprendidas segundo a perspectiva de participantes-chave [Episodic analysis of an ERP upgrade project at a logistics operator: a survey of the main lessons learned from the perspective of key participants]

Carvalho, Julio Cesar Gusmão 18 December 2014 (has links)
Made available in DSpace on 2015-10-28 (bitstream: Dissert JULIO CESAR GUSMÃO.pdf). Previous issue date: 2014-12-18 / In general, the environment of complex projects is fraught with problems of a technical, managerial, and behavioral nature. ERP projects in particular are equally complex undertakings, in which client and vendor companies must be closely aligned to minimize nonconformities and increase the initiative's chances of success. Through episodic research, this study analyzes an ERP upgrade project carried out at a logistics operator between February and November 2013. It aims to understand, from the perspective of key participants in the project, the critical points and the lessons learned in that undertaking. In theoretical terms, the study is supported by a literature review on project management best practices and on ERP implementation and maintenance projects. As its expected result, the study surveys the main lessons learned in the project, in order to contribute to the literature specific to ERP maintenance projects and to help project managers conduct future projects successfully.
52

DECENTRALIZED PRICE-DRIVEN DEMAND RESPONSE IN SMART ENERGY GRID

Zibo Zhao (5930495) 14 January 2021 (has links)
Real-time pricing (RTP) of electricity for consumers has long been argued to be crucial for realizing the many envisioned benefits of demand flexibility in a smart grid. However, many details of how to actually implement an RTP scheme are still under debate. Since most organized wholesale electricity markets in the US implement a two-settlement mechanism, with day-ahead electricity price forecasts guiding financial and physical transactions in the next day and real-time ex post prices settling any real-time imbalances, it is natural to let consumers respond to day-ahead prices in real time. If such an approach is not controlled properly, however, the inherent closed-loop operation may lead consumers all to respond in the same fashion, causing large swings in real-time demand and prices that may jeopardize system stability and increase consumers' financial risks.

To overcome the potential uncertainties and the undesired demand peaks caused by "selfish" behavior of individual consumers under RTP, this research develops a fully decentralized price-driven demand response (DR) approach within game-theoretical frameworks. In game theory, agents usually make decisions based on beliefs about competitors' states, which requires maintaining a large amount of knowledge and can be intractable and implausible for a large population. Instead, we propose regret-based learning in games, in which each agent focuses only on its own history and received utility. We study two learning mechanisms: bandit learning with incomplete information feedback, and low-regret learning with full information feedback. With learning in games, we establish performance guarantees for each individual agent (i.e., regret minimization) and for the overall system (i.e., bounds on the price of anarchy).

In addition to the game-theoretical framework for price-driven demand response, we apply the framework to peer-to-peer energy trading auctions. The market-based approach can better incentivize the development of distributed energy resources (DERs) on the demand side. However, the complexity of double-sided auctions in an energy market and agents' bounded rationality may invalidate many well-established results in auction design and consequently hinder market development. To address these issues, we propose an automated bidding framework based on multi-armed bandit learning through repeated auctions, aimed at minimizing each bidder's cumulative regret. We also use the framework to compare market outcomes of three different auction designs.
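The bandit-learning idea behind this abstract can be illustrated with a minimal epsilon-greedy sketch. This is a hedged illustration only: the arm values, epsilon, and reward model below are invented for the example and are not the dissertation's algorithms.

```python
import random

def epsilon_greedy_bandit(reward_fn, n_arms, rounds, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy bandit: explore with probability epsilon
    (and while any arm is unsampled), otherwise exploit the arm with
    the best empirical mean reward."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for _ in range(rounds):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit
        r = reward_fn(arm, rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]          # running mean
    return means

# Hypothetical bid levels with fixed expected utilities; arm 2 is best.
expected = [0.2, 0.5, 0.8]
def noisy_reward(arm, rng):
    return expected[arm] + rng.uniform(-0.1, 0.1)

means = epsilon_greedy_bandit(noisy_reward, n_arms=3, rounds=5000)
best_arm = max(range(3), key=lambda a: means[a])
```

Regret here is implicit: rounds spent on the inferior arms measure the learner's cumulative regret against the best arm in hindsight, the quantity that low-regret algorithms bound sublinearly in the number of rounds.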
53

Artificial Intelligence Guided In-Situ Piezoelectric Sensing for Concrete Strength Monitoring

Yen-Fang Su (11726888) 19 November 2021 (has links)
Developing a reliable in-situ non-destructive testing (NDT) method to determine the strength of in-place concrete is critical, because fast-paced construction schedules expose concrete pavements and structures to substantial loading even at early ages. Conventional destructive testing methods, such as compressive and flexural tests, are very time-consuming, which may cause construction delays or cost overruns. Moreover, the curing conditions of tested cylindrical samples and of in-place concrete pavements and structures are quite different, which may result in different strength values. An NDT method that directly correlates the mechanical properties of cementitious materials with the sensing results, regardless of curing conditions, mix design, and size effects, is needed for in-situ application.

The piezoelectric sensor-based electromechanical impedance (EMI) technique shows promise in addressing this challenge, as it has been used both to monitor properties and to detect damage in concrete structures. Owing to the direct and inverse piezoelectric effects, the material can act as a sensor, actuator, and transducer. This research serves as a comprehensive study of the feasibility and efficiency of using piezoelectric sensor-based EMI to evaluate the strength of newly poured concrete. To understand the fundamentals of the method and enhance sensor durability for in-situ monitoring, the work began with sensor fabrication, studying the effect of two types of polymer coating on sensor durability to make the sensor practical for field use.

Mortar and concrete samples with various mix designs were prepared to ascertain whether the results of the proposed sensing technique were affected by the different mixtures. EMI measurements and compressive strength tests (ASTM C39, ASTM C109) were conducted in the laboratory. The experimental results for mortar samples with different water-to-cement ratios (w/c) and two types of cement (I and III) showed a correlation coefficient (R²) higher than 0.93 for all mixes. In the concrete experiments, the correlation coefficient between the EMI sensing index and compressive strength of all mixes was higher than 0.90. An empirical estimation function was established through a concrete slab experiment. Moreover, several trial implementations on highway construction projects (I-70, I-74, and I-465) were conducted to monitor the real-time strength development of concrete. A data processing method and a reliable EMI sensing index were developed to establish a regression model correlating the sensing results with the compressive strength of concrete. The EMI sensing method and its related statistical index were found to effectively reflect the compressive strength gain of in-place concrete at different ages.

To further investigate the in-situ compressive strength of concrete in large-scale structures, a series of experiments on large concrete slabs (8 feet × 12 feet × 8 inches deep) was conducted in an outdoor test field to simulate real-world conditions. Different types of compressive strength samples, including cast-in-place (CIP) cylinders (4" × 6", ASTM C873), field-molded cylinders (4" × 8", ASTM C39), and core-drilled samples (4" × 8", ASTM C42), were prepared to compare the compressive strength of the concrete. Environmental conditions, such as ambient temperature and relative humidity, were also recorded, and in-situ EMI monitoring of concrete strength was conducted. Testing began 6 hours after the concrete was cast in place, to capture early-age results, and continued up to 365 days (one year) for long-term monitoring. The results indicate that the strength of the CIP samples is higher than that of the 4" × 8" molded cylinders, and that the core-drilled concrete is weaker than both. The EMI results obtained from the slab are close to those from the CIP samples due to similar curing conditions. The EMI results collected from the 4" × 8" cylinder samples are lower than those from the slab and CIP samples, which aligns with the mechanical testing results and indicates that EMI can capture the strength gain of concrete over time.

The database collected from the large slab tests was used to build a prediction model for concrete strength. An Artificial Neural Network (ANN) was investigated and tuned to optimize prediction performance. A sensitivity analysis was then conducted to understand the critical parameters for predicting the mechanical properties of concrete with the ML model. A framework based on Generative Adversarial Network (GAN) algorithms was then proposed to overcome restrictions on the use of real data. Two GAN algorithms were selected for data synthesis in this research: Tabular Generative Adversarial Networks (TGAN) and Conditional Tabular Generative Adversarial Networks (CTGAN). The testing results suggest that the CTGAN-NN model offers better testing performance and higher computational efficiency than the TGAN model. In conclusion, the AI-guided concrete strength sensing and prediction approaches developed in this dissertation are a stepping stone towards reliable and intelligent assessment of in-situ concrete structures.
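The kind of strength-correlation regression this abstract reports (R² above 0.9 between a sensing index and compressive strength) can be sketched with ordinary least squares. The EMI indices and strengths below are invented for illustration, not measured data:

```python
# Least-squares correlation of a sensing index with compressive strength,
# on synthetic numbers (the EMI indices and strengths are hypothetical).

def fit_line(x, y):
    """Ordinary least squares for y = a*x + b, plus the R^2 statistic."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical EMI index vs compressive strength (MPa) at several ages.
emi = [1.0, 1.8, 2.7, 3.5, 4.1, 4.8]
mpa = [8.0, 15.1, 22.8, 29.5, 34.9, 40.2]
a, b, r2 = fit_line(emi, mpa)
```

An empirical estimation function of this shape would then map field EMI readings to an estimated strength; the dissertation's actual indices and model form are not reproduced here.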
54

Quantifying Trust and Reputation for Defense against Adversaries in Multi-Channel Dynamic Spectrum Access Networks

Bhattacharjee, Shameek 01 January 2015 (has links)
Dynamic spectrum access enabled by cognitive radio networks is envisioned to drive next-generation wireless networks that can increase spectrum utility by opportunistically accessing unused spectrum. Due to the policy constraint that there must be no interference to primary (licensed) users, secondary cognitive radios have to continuously sense for primary transmissions. Typically, sensing reports from multiple cognitive radios are fused, as stand-alone observations are prone to errors due to wireless channel characteristics. Such dependence on cooperative spectrum sensing is vulnerable to attacks such as Secondary Spectrum Data Falsification (SSDF), in which multiple malicious or selfish radios falsify spectrum reports. Hence, there is a need to quantify the trustworthiness of radios that share spectrum sensing reports and to devise malicious node identification and robust fusion schemes that lead to correct inference about spectrum usage. In this work, we propose an anomaly monitoring technique that can effectively capture anomalies in the spectrum sensing reports shared by individual cognitive radios during cooperative spectrum sensing in a multi-channel distributed network. Such anomalies are used as evidence to compute the trustworthiness of a radio by its neighbours. The proposed anomaly monitoring technique works for any density of malicious nodes and for any physical environment. We propose an optimistic trust heuristic for a system with a normal risk attitude and show that it can be approximated as a beta distribution. For a more conservative system, we propose a conservative trust framework based on the multinomial Dirichlet distribution, where Jøsang's belief model is used to resolve any uncertainty in information that might arise during anomaly monitoring. Using a machine learning approach, we identify malicious nodes with a high degree of certainty regardless of their aggressiveness and of variations introduced by the path-loss environment.
We also propose extensions to the anomaly monitoring technique that facilitate learning about the strategies employed by malicious nodes and utilize the misleading information they provide. We further devise strategies to defend against collaborative SSDF attacks launched by a coalition of selfish nodes. Since defense against such collaborative attacks is difficult with popular voting-based inference models or node-centric isolation techniques, we propose a channel-centric Bayesian inference approach that indicates how much the collective decision on a channel's occupancy can be trusted. Based on the measured observations over time, we estimate the parameters of the hypotheses of anomalous and non-anomalous events using multinomial Bayesian inference. We quantitatively define the trustworthiness of a channel inference as the difference between the posterior beliefs associated with anomalous and non-anomalous events. The posterior beliefs are updated as a weighted average of the prior belief and the recently observed data. Subsequently, we propose robust fusion models that use node trusts to improve the accuracy of cooperative spectrum sensing decisions. In particular, we propose three fusion models: (i) optimistic trust-based fusion, (ii) conservative trust-based fusion, and (iii) inversion-based fusion. The first two approaches exclude untrustworthy sensing reports from fusion, while the last utilizes misleading information. All schemes are analyzed under various attack strategies. We propose an asymmetric weighted moving average based trust management scheme that quickly identifies on-off SSDF attacks and prevents quick trust redemption when such nodes revert to temporarily honest behavior. We also provide insights on which attack strategies are more effective from the adversaries' perspective.
Through extensive simulation experiments we show that the trust models are effective in identifying malicious nodes with a high degree of certainty under a variety of network and radio conditions. We show high true negative detection rates even when multiple malicious nodes launch collaborative attacks, an improvement over existing voting-based exclusion and entropy divergence techniques. We also improve the accuracy of fusion decisions compared to other popular fusion techniques: trust-based fusion schemes show worst-case decision error rates of 5%, and inversion-based fusion 4%, as opposed to majority voting schemes, which have an 18% error rate. We also show that the proposed channel-centric Bayesian inference trust model is able to distinguish between attacked and non-attacked channels for both static and dynamic collaborative attacks, and that attacked channels have significantly lower trust values than channels that are not, a metric that nodes can use to rank the quality of inference on channels.
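The beta-distribution trust idea mentioned above can be sketched in a few lines. This is an illustrative reputation score only, with invented evidence counts; the dissertation's actual heuristic and evidence model are richer:

```python
# Beta-reputation trust score: evidence is counts of consistent vs
# anomalous sensing reports observed from a neighbour (hypothetical numbers).

def beta_trust(consistent, anomalous):
    """Expected value of Beta(consistent + 1, anomalous + 1),
    i.e. the posterior mean after a uniform Beta(1, 1) prior."""
    alpha = consistent + 1
    beta = anomalous + 1
    return alpha / (alpha + beta)

honest_radio = beta_trust(consistent=95, anomalous=5)      # ~0.94
malicious_radio = beta_trust(consistent=30, anomalous=70)  # ~0.30
```

A fusion rule can then weight or exclude a neighbour's reports by such a score, which is the role the trust-based fusion models above play.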
55

EXPERIMENTS, DATA ANALYSIS, AND MACHINE LEARNING APPLIED TO FIRE SAFETY IN AIRCRAFT APPLICATIONS

Luke N Dillard (11825048) 11 December 2023 (has links)
Hot surface ignition is a safety design concern for several industries, including mining, aviation, automotive, boilers, and maritime applications. Bleed air ducts, exhaust pipes, combustion liners, and machine tools operated at elevated temperatures may be a source of ignition that must be accounted for during design. An apparatus for measuring the minimum hot surface ignition temperature (MHSIT) of three aviation fluids (Jet-A, hydraulic oil (MIL-PRF-5606), and lubrication oil (MIL-PRF-23699)) has been developed. This study expands a widely utilized database of MHSIT values and the current range of design parameters, including air temperature, crossflow velocity, fluid temperature, global equivalence ratio, injection method, and the effects of pressure. The expanded data are used to continue the development of a physics-anchored, data-dependent system and a machine learning model for estimating MHSIT.

The aviation industry, including Rolls-Royce, currently uses a database of MHSIT values resulting from experiments conducted in 1988 at the Air Force Research Laboratory (AFRL) at Wright-Patterson Air Force Base in Dayton, OH. Over the three decades since those experiments, the range of operating conditions has broadened significantly in most applications, including high-performance aircraft engines. For example, cross-stream air velocities (V) have increased by a factor of two (from ~3.4 m/s to ~6.7 m/s). Expanding the known database to document MHSIT over a range of fuel temperatures (TF), air temperatures (TA), pressures (P), and air velocities (V) is of great interest to the aviation industry. MHSIT data for current aviation fluids such as Jet-A and MIL-PRF-23699 (lubrication oil), and their relation to the design parameters, have recently been investigated in a generic experimental apparatus.

The current work uses this generic experimental apparatus to further the understanding of MHSIT through the investigation of intermediate air velocities, global equivalence ratios, injection method, and the effects of pressure. This study investigates the effects of air velocity with a greater degree of granularity, using 0.6 m/s increments, to capture the uncertainty seen in MHSIT values above 3.0 m/s. It also expands the understanding of the effect of injection method on MHSIT with the inclusion of spray-injected lubrication oil (MIL-PRF-23699) and stream-injected Jet-A. The effects of global equivalence ratio are examined for spray-injected Jet-A by modulating the aviation fluid injection rate and the crossflow air velocity in tandem.

During previous experimental campaigns, it was found that MHSIT does not monotonically increase with crossflow air velocity, as previously believed. This finding inspired a set of experiments that identified four proposed ignition regimes for MHSIT in crossflow: conduction, convective cooling, turbulent mixing, and advection. The current study replicates the results of the initial experiments at new conditions and determines the effects of surface temperature on the regimes.

The MHSIT of flammable liquids depends on several factors, including leak type (spray or stream), liquid temperature, air temperature, velocity, and pressure. ASTM standardized ignition methods are limited to stagnant and downward-falling drops (autoignition) at atmospheric pressure (ASTM E659, ASTM D8211, and ASTM E1491) and at pressures from 218 to 203 kPa (ASTM G72). Past studies have shown that MHSIT decreases with increasing pressure, but the available databases lack results of extensive experimental investigation; data for pressures between 101 and 203 kPa are missing or inadequate. The generic experimental apparatus was therefore modified to produce air duct pressure levels of 101 to 203 kPa, representative of a typical turbofan engine.

Machine learning (ML) and deep learning (DL) have become widely available in recent years. Open-source software packages and languages have made it possible to implement complex ML-based data analysis and modeling techniques in a wide range of applications, expediting existing models or reducing the amount of physical lab investigation time required. Three data sets were used to examine the effectiveness of multiple ML techniques in estimating experimental outcomes and serving as a substitute for additional lab work; to this end, complex multivariate regressions and neural networks were used to create estimating models. The first data set consists of a pool fire experiment that measured flame spread rate as a function of initial fuel temperature for 8 different fuels, including Jet-A, JP-5, JP-8, HEFA-50, and FT-PK. The second data set consists of hot surface ignition data for 9 fuels, including 4 alternative piston engine fuels for which properties were not available. The third data set is the MHSIT data generated by the generic experimental apparatus during the investigations conducted to expand the understanding of minimum hot surface ignition temperatures. When properties were not available, multiple imputation by chained equations (MICE) was used to estimate fluid properties. Training and testing sets were split into 70% and 30% of the respective data set being modeled. ML techniques were implemented to analyze the data, and R-squared values as high as 92% were achieved. The limitations of machine learning models are also discussed, along with the advantages of physics-based approaches. The current study has furthered the application of ML in combustion through use of the MHSIT database.
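The 70/30 train/test evaluation workflow described here can be sketched as follows, with synthetic data and a deliberately trivial nearest-neighbour model invented for illustration; none of the MHSIT measurements or the dissertation's models are reproduced:

```python
import random

# 70/30 split and held-out R^2 with a trivial 1-nearest-neighbour model.
# The (velocity, fluid temperature) -> ignition temperature data below
# are synthetic, not MHSIT measurements.

rng = random.Random(42)
data = [((v, t), 600 - 20 * v + 0.1 * t + rng.uniform(-5, 5))
        for v in [0.6 * i for i in range(1, 11)]
        for t in [20, 60, 100]]
rng.shuffle(data)
cut = int(0.7 * len(data))          # 70% train, 30% test
train, test = data[:cut], data[cut:]

def predict(x):
    """Look up the nearest training point and return its target."""
    nearest = min(train,
                  key=lambda d: sum((a - b) ** 2 for a, b in zip(d[0], x)))
    return nearest[1]

y_true = [y for _, y in test]
y_pred = [predict(x) for x, _ in test]
mean_y = sum(y_true) / len(y_true)
ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
r2 = 1 - ss_res / ss_tot            # held-out goodness of fit
```

The point of the split is that R² is computed only on data the model never saw, which is what makes the reported 92% a meaningful estimate of generalization rather than of memorization.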
56

Safety and Mobility Improvement of Mixed Traffic Using Optimization- and Learning-Based Methods

Runjia Du (9756128) 11 December 2023 (has links)
Traffic safety and congestion are global concerns. Autonomous vehicles (AVs) are expected to enhance transportation safety and reduce congestion, but achieving their full potential requires 100% market penetration, a challenging task. This study addresses key issues in mixed traffic environments, where human-driven vehicles (HDVs) and connected autonomous vehicles (CAVs) coexist. A number of critical questions persist: 1) inadequate exploration of human errors (errors originating from non-CAV sources) in mixed traffic; 2) limited focus on information selection and learning efficiency in network-level rerouting, particularly in highly dynamic environments; 3) insufficient incorporation of personalized driver inputs in motion-planning frameworks; and 4) lack of consideration of user privacy concerns.

With the goal of advancing existing knowledge in this field and shedding light on these matters, this dissertation introduces multiple frameworks that leverage connectivity and automation to improve safety and mobility in mixed traffic, addressing various research levels: local- and network-level safety enhancement as well as network- and global-level mobility enhancement. Using optimization- and learning-based methods (Model Predictive Control, Deep Neural Networks, Deep Reinforcement Learning, Transformer models, and Federated Learning), the frameworks introduced in this dissertation are expected to help highway agencies and vehicle manufacturers improve the safety and efficiency of traffic flow in the mixed-traffic era. Our findings show increased crash-avoidance rates in critical situations, enhanced accuracy in predicting lane changes, improved dynamic rerouting within urban areas, and effective data-sharing mechanisms with a focus on user privacy. This research underscores the potential of connectivity and automation to significantly enhance mixed-traffic safety and mobility.
57

Semiparametric and Nonparametric Methods for Complex Data

Kim, Byung-Jun 26 June 2020 (has links)
A variety of complex data types has emerged in many research fields, such as epidemiology, genomics, and analytical chemistry, with the development of science, technology, and study designs over the past few decades. For example, in epidemiology, the matched case-crossover study design is used to investigate the association between clustered binary disease outcomes and a covariate measured with error within a certain period, stratifying on subjects' conditions. In genomics, highly correlated and high-dimensional (HCHD) data are required to identify important genes and their interaction effects on diseases. In analytical chemistry, multiple time series are generated to recognize complex patterns among multiple classes. Given this great diversity, we encounter three problems in analyzing such complex data in this dissertation, and we contribute several semiparametric and nonparametric methods for dealing with them: the first is a method for testing the significance of a functional association under the matched study; the second is a method to simultaneously identify important variables and build a network in HCHD data; the third is a multi-class dynamic model for recognizing patterns in time-trend analysis. For the first topic, we propose a semiparametric omnibus test for the significance of a functional association between clustered binary outcomes and covariates with measurement error, taking into account the effect modification of matching covariates. We develop a flexible omnibus test that does not require a specific alternative form of the hypothesis. The advantages of our omnibus test are demonstrated through simulation studies and analyses of 1-4 bidirectional matched data from an epidemiology study.
For the second topic, we propose a joint semiparametric kernel machine network approach that connects variable selection and network estimation. Our approach is a unified, integrated method that can simultaneously identify important variables and build a network among them. We develop it under a semiparametric kernel machine regression framework, which allows for the possibility that each variable is nonlinear and likely interacts with the others in a complicated way. We demonstrate the approach using simulation studies and a real application to genetic pathway analysis. Lastly, for the third project, we propose a Bayesian focal-area detection method for a multi-class dynamic model under a Bayesian hierarchical framework. Two-step Bayesian sequential procedures are developed to estimate patterns and detect focal intervals, with application to gas chromatography. We demonstrate the performance of the proposed method using a simulation study and a real application to gas chromatography on a Fast Odor Chromatographic Sniffer (FOX) system. / Doctor of Philosophy / A variety of complex data types has emerged in many research fields, such as epidemiology, genomics, and analytical chemistry, with the development of science, technology, and study designs over the past few decades. For example, in epidemiology, the matched case-crossover study design is used to investigate the association between clustered binary disease outcomes and a covariate measured with error within a certain period, stratifying on subjects' conditions. In genomics, highly correlated and high-dimensional (HCHD) data are required to identify important genes and their interaction effects on diseases. In analytical chemistry, multiple time series are generated to recognize complex patterns among multiple classes.
Given this great diversity, we encounter three problems in analyzing the following three types of data: (1) matched case-crossover data, (2) HCHD data, and (3) time-series data. We contribute to the development of statistical methods to deal with such complex data. First, under the matched study, we discuss hypothesis testing to effectively determine the association between observed factors and the risk of a disease of interest. Because in practice we do not know the specific form of the association, it can be challenging to set a specific alternative hypothesis. Reflecting reality, we allow for the possibility that some observations are measured with error, and with these measurement errors in mind we develop a testing procedure under the matched case-crossover framework. This procedure has the flexibility to make inferences under various hypothesis settings. Second, we consider data where the number of variables is very large compared to the sample size and the variables are correlated with each other. Here our goal is to identify, among a large number of variables, those important for an outcome, and to build their network. For example, identifying a few genes associated with diabetes among whole-genome data can be used to develop biomarkers. With the approach proposed in the second project, we can identify differentially expressed and important genes and their network structure while accounting for the outcome. Lastly, we consider the scenario of patterns of interest changing over time, with application to gas chromatography. We propose an efficient detection method to distinguish the patterns of multi-level subjects in time-trend analysis, providing valuable guidance for efficiently finding distinguishable patterns and reducing the burden of examining all observations in the data.
58

Aprendizagem baseada em problemas aplicada ao ensino de direito: Projeto exploratório na área de relações de consumo [Problem-based learning applied to legal education: an exploratory project in the area of consumption relations]

Carlini, Angélica Luciá 27 November 2006 (has links)
Made available in DSpace on 2016-04-27T14:31:56Z (GMT). No. of bitstreams: 1 CED - Angelica Lucia Carlini.pdf: 694460 bytes, checksum: d1461bd01bd2765b4d446840e5684a1c (MD5) Previous issue date: 2006-11-27 / The work contemplates the experience accomplished with the application of the paradigm of learning based on problems, in two groups of Law School undergraduation students, in the period between June 2004 and June 2005, in the São Francisco University, in Bragança Paulista, concerning the study of consumption relations and the consumer's right. The experience was carried out aiming at answering to the query whether the paradigm of learning based on problems is possible of being used in teaching Law in Brazil and, whether this paradigm can mean a renewal in the teaching-learning relation both for teachers and for Law students. The methodological option used was that of qualitative research. The accomplished research comprised the bibliographical-research and the action-research and the used procedures were the participant-observation and the non-directive interview. The research rebuilds the historical path of the Law courses in Brazil, with the objective of drawing a backdrop for the reflection over the need of changes in the teaching-learning relation, which nowadays is still marked, in those courses, by little stimulating practices, like the prevailing use of the traditional expository class pattern, understood as the one where the teacher is the knowledge transmitter for the students who passively receive it. Besides, the Law teaching is also marked by an excessive attachment to the positivism, what results in the lack of room for the construction of a critical reflection about the Law science. The study analyzes the theoretical bases on which the paradigm of learning based on problems is built, and its possibility of being applied to the Law teaching in Brazil. 
The research discusses the relevant aspects of the experience, especially the problems constructed and presented to the students and the ways in which they built their solutions. It also analyzes the students' performance in field work with adolescents and tradespeople of Bragança Paulista, in which they taught lay people relevant aspects of consumer rights and consumption relations
59

Analysis, Diagnosis and Design for System-level Signal and Power Integrity in Chip-package-systems

Ambasana, Nikita January 2017 (has links) (PDF)
The Internet of Things (IoT) has ushered in an age where low-power sensors generate data that are communicated to a back-end cloud for massive data computation tasks. From the hardware perspective, this implies the co-existence of several power-efficient sub-systems working harmoniously at the sensor nodes, capable of communication, and high-speed processors in the cloud back-end. Package-board system-level design plays a crucial role in determining the performance of such low-power sensors and high-speed computing and communication systems. Although several commercial solutions exist for electromagnetic and circuit analysis and verification, problem-diagnosis and design tools are lacking, leading to longer design cycles and non-optimal system designs. This work aims at developing methodologies for faster analysis, sensitivity-based diagnosis, and multi-objective design toward signal integrity and power integrity of such package-board system layouts. The first part of this work develops a methodology to enable faster and more exhaustive design-space analysis. Electromagnetic analysis of packages and boards can be performed in the time domain, yielding metrics such as eye height/width, and in the frequency domain, yielding metrics such as s-parameters and z-parameters. Generating eye height/width at higher bit error rates requires longer bit sequences in time-domain circuit simulation, which is compute-time intensive. This work explores learning-based modelling techniques that rapidly map relevant frequency-domain metrics, such as differential insertion loss and crosstalk, to eye height/width, thereby facilitating a full-factorial design-space sweep. Numerical experiments with an artificial neural network as well as a least-squares support vector machine on SATA 3.0 and PCIe Gen 3 interfaces yield less than 2% average error with an order-of-magnitude speed-up in eye-height/width computation. 
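The surrogate-modelling idea described in this entry — learning a fast map from cheap frequency-domain metrics to expensive time-domain eye metrics — can be sketched with RBF kernel ridge regression, a close relative of the least-squares SVM the thesis uses. Everything below (feature ranges, the toy eye-height relation, all hyperparameters) is an illustrative stand-in, not data or settings from the thesis:

```python
import numpy as np

# Hypothetical surrogate: (insertion loss dB, crosstalk dB) -> eye height (V).

def rbf_kernel(A, B, gamma):
    # Pairwise RBF kernel between rows of A and rows of B
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def fit(X, y, lam, gamma):
    # Dual ridge coefficients: alpha = (K + lam*I)^-1 y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
# Synthetic feature samples: differential insertion loss, crosstalk (dB)
X = rng.uniform([-12.0, -40.0], [-1.0, -20.0], size=(50, 2))
# Toy smooth relation standing in for the time-domain circuit simulator
y = 0.4 + 0.02 * X[:, 0] - 0.003 * X[:, 1]

alpha = fit(X, y, lam=1e-3, gamma=0.05)
pred = predict(X, alpha, X[:5], gamma=0.05)
```

Once fitted, evaluating the surrogate costs one kernel row per query instead of a long bit-sequence simulation, which is what makes a full-factorial sweep affordable.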
Accurate power distribution network design is crucial for low-power sensors as well as cloud server boards that require multiple supply levels. Achieving target power-ground noise levels for complex low-power power distribution networks requires several design and analysis cycles. Although various classes of analysis tools, 2.5D and 3D, are commercially available, design tools remain scarce. In the second part of the thesis, a frequency-domain, mesh-based sensitivity formulation for DC and AC impedance (z-parameters) is proposed. This formulation enables diagnosis of the layout regions with the greatest impact on achieving target specifications. The sensitivity information is also used for linear approximation of impedance-profile updates under small mesh variations, enabling faster analysis. To design power delivery networks that achieve a target impedance, a mesh-based decoupling-capacitor sensitivity formulation is presented. This analytical gradient is used in gradient-based optimization to obtain an optimal set of decoupling capacitors, with appropriate values and placement in the package/board, for a given target impedance profile. Gradient-based techniques are far less expensive than the state-of-the-art evolutionary optimization techniques presently used for decoupling-capacitor network design. In the last part of this work, the functional similarities between package-board design and radio-frequency imaging are explored. Qualitative inverse-solution methods common in the radio-frequency imaging community, such as Tikhonov regularization and the Landweber method, are applied to solve multi-objective, multi-variable signal-integrity package design problems. Consequently, a novel Hierarchical Search Linear Back Projection algorithm is developed for efficient solution search in the design space using piecewise linear approximations. 
The presented algorithm is demonstrated to converge to the desired signal-integrity specifications with a minimum number of full-wave 3D solver iterations.
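The Landweber method this entry borrows from imaging has a compact form: starting from x₀, iterate x ← x + λ Aᵀ(b − A x), a gradient descent on ‖Ax − b‖² that converges for step sizes below 2/σ_max(A)². A minimal sketch on a synthetic linear forward operator (the matrix, target, and sizes below are illustrative, not from the thesis):

```python
import numpy as np

# Landweber iteration for the linear inverse problem A x = b.
# Here A stands in for a (locally linearized) map from design
# variables to responses; all values are synthetic.

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))          # forward operator (overdetermined)
x_true = rng.normal(size=5)           # "design" that meets the target
b = A @ x_true                        # target response

lam = 1.0 / np.linalg.norm(A, 2)**2   # step size safely below 2/sigma_max^2
x = np.zeros(5)
for _ in range(2000):
    x = x + lam * A.T @ (b - A @ x)   # back-project the residual
```

In the thesis's setting, the interesting part is doing this over piecewise linear approximations of an expensive full-wave solver, which is where the hierarchical search enters; the fixed-matrix loop above only shows the core update rule.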
60

Deep Learning Studies for Vision-based Condition Assessment and Attribute Estimation of Civil Infrastructure Systems

Fu-Chen Chen (7484339) 14 January 2021 (has links)
Structural health monitoring and building assessment are crucial for acquiring the state of structures and maintaining their condition. Compared with manual surveys, which are subjective, time-consuming, and expensive, autonomous image and video analysis is faster, more efficient, and non-destructive. This thesis focuses on crack detection from videos, crack segmentation from images, and building assessment from street-view images. For crack detection from videos, three approaches are proposed, based on local binary patterns (LBP) with a support vector machine (SVM), a deep convolutional neural network (DCNN), and a fully-connected network (FCN). A parametric Naïve Bayes data-fusion scheme is introduced that registers video frames in a spatiotemporal coordinate system and fuses information based on Bayesian probability to increase detection precision. For crack segmentation from images, the rotation-invariant property of cracks is exploited to enhance segmentation accuracy. The architectures of several approximately rotation-invariant DCNNs are discussed and compared on several crack datasets. For building assessment from street-view images, a framework of multiple DCNNs is proposed to detect buildings and predict attributes crucial for flood-risk estimation, including founding heights, foundation types (pier, slab, mobile home, or others), building types (commercial, residential, or mobile home), and number of building stories. A feature-fusion scheme is proposed that combines image features with meta-information to improve the predictions, and a task relation encoding network (TREncNet) is introduced that encodes task relations as network connections to enhance multi-task learning.
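The Bayesian fusion step described above — combining per-frame detector outputs registered to the same spatial location — can be sketched as log-odds pooling under a conditional-independence (naive Bayes) assumption. The function below is an illustrative reconstruction of that idea, not the thesis's exact parametric scheme:

```python
import numpy as np

def fuse_naive_bayes(probs, prior=0.5):
    # Naive-Bayes fusion of per-frame crack probabilities for one
    # registered location: posterior log-odds is the sum of the frame
    # log-odds, minus (n-1) copies of the prior log-odds so the prior
    # is counted only once.
    probs = np.asarray(probs, dtype=float)
    logit = lambda p: np.log(p / (1.0 - p))
    post = logit(probs).sum() - (len(probs) - 1) * logit(prior)
    return 1.0 / (1.0 + np.exp(-post))   # back to a probability
```

With a 0.5 prior, two frames each reporting 0.8 fuse to 16/17 ≈ 0.94, so consistent moderate detections reinforce each other, while an 0.8 and a 0.2 cancel back to 0.5 — which is how fusing registered frames can raise precision over single-frame detection.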
