291 |
Projektplaneringens hantering av big data : En studie av företag som utvecklar medicintekniska produkter / Project Planning Management of Big Data : A study of companies developing Medtech products. DJURSÉN, WILLIAM; SÖDERBERG, JOHN. January 2020 (has links)
En ökad användning av teknologi inom det medicinska fältet har öppnat möjligheter att använda patientdata i nya medicintekniska produkter. Genom användningen av sensorer och appar har det skapats en stor mängd patientdata. I detta sammanhang har begreppet big data uppstått. Dock finns det ingen tydlig definition som beskriver big data, eftersom det inte finns någon gräns för hur mycket data som krävs för termen. Syftet med denna undersökning är att granska om, och i så fall hur, medicintekniska företag anpassar sin projektplanering med hänsyn till möjlig användning av big data. Målet är också att studera de problem och möjligheter som kan uppstå kring användningen av big data. Undersökningen består av en litteraturundersökning av medicinteknik, projektledning och big data. Den andra delen av undersökningen utgörs av semistrukturerade intervjuer. Två intervjuer gjordes med olika respondenter som arbetar inom medicintekniska projekt. Projektledning har varierande modeller för olika faser i ett projekt. Förstudien är en viktig fas, speciellt om det är ett stort och komplext projekt. Att genomföra en noggrann förstudie gör det enklare att upptäcka problem och möjligheter i projektet. Även lagar och olika klassifikationer av medicintekniska produkter måste tas hänsyn till, eftersom produkter kan orsaka positiva och negativa effekter på samhället och individer. Resultatet av denna studie visar att det inte finns någon speciellt anpassad förstudie i företaget för att hantera big data i medicintekniska projekt. Däremot sker en planering som är antingen lik eller en variant av modellerna från litteraturundersökningen. Fortsatt arbete med implementering och ett bredare perspektiv på projektledningsmodeller kan vara till nytta för att slutföra big data-orienterade projekt. Lagstiftningen kring medicintekniska produkter skapar tidskrävande problem i utvecklingsfasen eftersom en CE-certifiering krävs.
Med användandet av teknologi kan företag undvika problemet med att utvärdera klassificerad patientdata genom att analysera annan information och på så sätt undvika hög klassificering av den medicintekniska produkten. / A growing use of technology in the medical field has created an opportunity to use patient data in new Medtech products. Through the use of sensors and apps, a vast amount of patient data has been created. In this context, the concept of big data has emerged. However, there is no clear definition of big data, because there is no threshold for how much data the term requires. The purpose of this research project is to examine if, and in that case how, Medtech companies adapt their project planning with regard to a possible use of big data. The purpose is also to study the problems and opportunities that may occur in the use of big data. The study consists of a literature review on Medtech, project management and big data. The second part of the research consists of semi-structured interviews; two interviews were conducted with respondents who work with Medtech projects. Project management offers various models for the different phases of a project. The pre-planning phase is an important stage, especially in a big and complex project. By carrying out a thorough pre-study it is easier to detect problems and opportunities in the project. Laws and the different classifications of Medtech products also need to be considered, since a product may have positive and negative effects on society and individuals. The results of this study show that there is no specially adapted pre-study in the company for handling big data in Medtech projects. However, the planning that does occur is either similar to, or a variant of, the models from the literature review. Further work on implementation and a broader perspective on project management models can benefit the completion of big data-oriented projects.
The legislation on medical devices creates a time-consuming problem in the development phase because of the need for CE certification. With the use of technology, companies are able to bypass the problem of evaluating classified patient data by analyzing other information and thus avoid a high classification of the medical product.
|
292 |
DIGITALISERINGENS HINDER INOM TILLVERKNINGSINDUSTRIN : Ett kandidatexamensarbete om vilka hinder som finns för digitalisering under bearbetningsprocessen. Andersson, Hampus; Jemdahl, Maria. January 2020 (has links)
Industrin genomgår sin fjärde industriella revolution; maskiner och hela fabriker kopplar upp sig mot molnet och börjar prata med varandra. Detta möjliggör stora resurseffektiviseringar om det utförs på rätt sätt. Utvecklingen av nya metoder för insamling av data medför inte bara möjligheter utan skapar även nya frågor som industrin inte handskats med tidigare. Syftet med arbetet har varit att titta närmare på hur företag och industrin generellt sett ser på omställningen och vilka hinder som de anser är svårast när det kommer till implementering av ny teknik, men även hur företagen bakom dessa nya tjänster och produkter jobbar med hinder under omställningen. Som grund för rapporten ligger en litteraturstudie där fakta om de tekniska lösningar som finns och en bakgrund till skärande bearbetning presenteras, vilket också varit det område som arbetet begränsats till. Personer har intervjuats för att få en inblick i hur olika företag jobbar med dessa frågor och för att kunna jämföra med litteraturstudien. Resultatet från arbetet visar att industrin står inför många svåra beslut; samtidigt är alla överens om att digitalisering är den rätta vägen att gå. Det främsta hindret är kompetensfrågan. De som köper digitala tjänster och produkter anser i de flesta fall att den kompetens som krävs för att kunna implementera tekniken på effektivast sätt saknas. De utvecklande företagen står inför problemet att det många gånger är utmaningar de inte jobbat med tidigare och att de därför saknar kompetens för själva utvecklingen. Övriga hinder som arbetet påvisat handlar om kostnader, gemensamma standarder och datasäkerhet. / The industry of today is going through its fourth industrial revolution, where machines and factories are starting to connect to the cloud and communicate with each other (Internet of Things). If performed correctly, this enables great resource efficiency.
The development of new methods of data collection brings not only possibilities but also raises new questions that the industry hasn't dealt with before. The purpose of this project has been to take a closer look at how companies and the industry generally approach the adaptation to digitalization and which difficulties are seen as the most demanding when it comes to implementing new methods, and also how the companies behind new services and products work with difficulties during development. The base for this project has been a literature study where facts about some of today's technical solutions are presented, along with a background to metal cutting, which is the area this project is limited to. Interviews have been conducted to get a picture of how different companies are working with these questions and to compare them with the information from the literature study. The result of this project shows that the industry faces many difficult decisions, but with a common view that digitalization is the right way to go. The most prominent obstacle concerns competence and the lack of it. Those who purchase digital products and services consider themselves to lack the right knowledge to implement new technologies in the most efficient way. The developing companies face challenges when it comes to finding the right competence for the development itself, since these are often new challenges that they have not encountered before. Further impediments that the project has shown are cost, common standards and data security.
|
293 |
[pt] RUMO A CIDADES MAIS INTELIGENTES: ESTRATÉGIAS PARA INTEGRAR DADOS QUANTITATIVOS E QUALITATIVOS POR MEIO DE PROCESSOS DE DESIGN PARTICIPATIVO / [en] TOWARDS SMARTER CITIES: STRATEGIES TO INTEGRATE QUANTITATIVE AND QUALITATIVE DATA BY PARTICIPATORY DESIGN PROCESS. RAQUEL CORREA CORDEIRO. 28 May 2024 (has links)
[pt] O conceito de cidades inteligentes é frequentemente associado a avanços
tecnológicos, porém também abrange aspectos do bem-estar dos cidadãos e a
sustentabilidade. A crescente disponibilidade de dados digitais resulta em um foco
excessivo na tecnologia, negligenciando a participação cidadã e subutilizando
consequentemente o potencial dessas informações. A nossa hipótese é que o design
pode facilitar o acesso a dados urbanos complexos por meio de narrativas baseadas
em dados e de processos participativos com a população. Logo, testamos um processo
de co-design utilizando métodos mistos para analisar o comportamento de
mobilidade. Estruturada em duas fases, a pesquisa inicialmente explorou projetos de
mobilidade, analisando relatórios da iniciativa Civitas e entrevistando profissionais
atuantes na área. Os desafios e soluções identificados foram testados na segunda fase,
usando métodos como coleta de dados abertos municipais, diário de uso e análise de
sentimentos em redes sociais. Por fim, foi realizado um workshop de co-design
usando ferramentas de visualização de dados para co-analisar a relação dos efeitos
meteorológicos na mobilidade urbana. Os resultados destacam o potencial do
designer como mediador, com participantes relatando facilidade para analisar
volumes substanciais de dados e considerando a proposta inovadora e agradável.
Pesquisas futuras poderiam avaliar a compreensão dos dados pelos participantes. A
contribuição desta tese reside em um processo de co-design que pode incluir diversos
atores, como governo, setor privado e cidadãos, utilizando ferramentas de narrativas
baseadas em dados, aplicáveis a quaisquer projetos com vasto volume de informação. / [en] The concept of smart cities is often associated with technological
advancement, but it also encompasses aspects of citizen well-being and
sustainability. The growing availability of digital data results in an excessive focus
on technology, neglecting citizen participation and consequently underutilizing the
potential of this information. Our hypothesis is that design can facilitate access to
complex urban data through data storytelling and participatory processes.
Therefore, we tested a co-design process using mixed methods to analyze mobility
behavior. Structured in two phases, the study initially explored mobility projects by
analyzing reports from the Civitas initiative and interviewing professionals in the
field. The identified challenges and solutions were then tested in the second phase,
employing data collection methods such as city open data analysis, diary studies,
and sentiment analysis on social media. Finally, a co-design workshop was
conducted incorporating data visualization tools to co-analyze the weather effects
on urban mobility. The results highlight the significant potential of the designer as
a facilitator, with participants reporting ease in analyzing substantial data volumes
and considering the proposal innovative and enjoyable. Future research may
evaluate participants' understanding of the data. The contribution of this thesis lies
in a co-design process that can involve various stakeholders, including government,
private enterprises, and citizens, using data storytelling tools applicable to any
project dealing with large data volumes.
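One of the mixed methods mentioned above, sentiment analysis of social media posts about mobility, can be illustrated with a minimal lexicon-based sketch. The word lists and the polarity-counting rule are illustrative assumptions for this example, not the tooling used in the thesis:

```python
# Hypothetical sentiment lexicons for mobility-related posts (illustrative only).
POSITIVE = {"fast", "reliable", "clean", "pleasant", "punctual"}
NEGATIVE = {"late", "crowded", "slow", "dirty", "unreliable"}

def sentiment(text: str) -> int:
    """Score a post: +1 for each positive word, -1 for each negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("the bus was late and crowded"))    # → -2
print(sentiment("fast and reliable tram service"))  # → 2
```

In practice, studies of this kind rely on trained models or richer lexicons that handle negation, intensifiers and language-specific vocabulary; the sketch only shows the basic polarity-counting idea behind the method.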
|
294 |
The utilization of BDA in digital marketing strategies of international B2B organizations from a dynamic capability's perspective : A qualitative case study. Jonsdottir, Hugrun Dis. January 2024 (has links)
In B2B organizations, the adoption of digital marketing strategies has increased, leading to the collection of large amounts of data: big data. This has enabled the use of big data analytics (BDA) to uncover valuable insights for digital marketing purposes. Yet there is limited research on how B2B organizations integrate and utilize BDA in their digital marketing strategies, especially in an international context. This study aimed to address this research gap by examining how international B2B organizations integrate and utilize BDA in their digital marketing strategy, employing a dynamic capabilities perspective. A qualitative case study methodology was applied, focusing on two established Swedish B2B organizations with an international presence. Empirical data was collected through semi-structured interviews and complemented with document analysis. Through an abductive approach and hermeneutic interpretation, the findings show that despite the need for internal structural improvements, international B2B organizations are actively integrating BDA into their digital marketing strategies. By developing new routines and skills, these organizations can navigate the challenges posed by BDA while harnessing its benefits. Additionally, a framework comprising ten practices through which international B2B organizations leverage BDA is proposed.
|
295 |
[pt] A RESPONSABILIDADE CIVIL PELOS DANOS CAUSADOS POR SISTEMAS DOTADOS DE INTELIGÊNCIA ARTIFICIAL / [en] CIVIL LIABILITY FOR DAMAGES CAUSED BY ARTIFICIAL INTELLIGENCE SYSTEMS. STEFANNIE MYRIAM QUELHAS BILLWILLER. 10 October 2024 (has links)
[pt] Após a exposição de noções gerais, conceitos e princípios atinentes à
inteligência artificial, bem como a utilização de guias deontológicos utilizados para
nortear esse projeto de pesquisa, apresentar-se-ão as problemáticas envolvendo a
utilização dos sistemas inteligentes e os reflexos na vida cotidiana das pessoas. No
cenário atual, o presente estudo visa debater os impactos da IA e prováveis danos
injustos que os sistemas cometem contra determinados grupos de pessoas; os
chamados danos algorítmicos. Dessa forma, quando e a quem recorrer quando tais
danos são cometidos? Quem é civilmente responsável por arcar com eventuais
prejuízos que as vítimas venham a sofrer? Ressalte-se que ainda não há legislação
específica sobre o assunto em vigor, e tal tema será também analisado no âmbito
desta pesquisa: será que o ordenamento jurídico brasileiro já possui ferramentas
para dirimir as questões que hoje a IA apresenta no âmbito da responsabilidade
civil? / [en] After presenting general notions, concepts and principles relating to artificial intelligence, as well as the ethical guides used to orient this research project, the problems involving the use of intelligent systems and their consequences in people's daily lives will be presented. In the current scenario, this study aims to debate the impacts of AI and the probable unfair harm that these systems inflict on certain groups of people: so-called algorithmic damages. Therefore, when and to whom can one turn when such damages occur? Who is civilly liable for any losses that victims may suffer? It should be noted that to date there is no specific legislation on the subject in force, and this topic will also be analyzed within the scope of this research: does the Brazilian legal system already have the tools to resolve the issues that AI raises today in the context of civil liability?
|
296 |
Revealing the Non-technical Side of Big Data Analytics : Evidence from Born analyticals and Big intelligent firms. Denadija, Feda; Löfgren, David. January 2016 (has links)
This study aspired to gain a more nuanced understanding of the emerging analytics technologies and the vital capabilities that ultimately drive evidence-based decision making. Big data technology is widely discussed by various groups in society and believed to revolutionize corporate decision making. In spite of big data's promising possibilities, only a trivial fraction of firms deploying big data analytics (BDA) have gained significant benefits from their initiatives. To explain this inability, we drew on prior IT literature suggesting that IT resources can only be successfully deployed when combined with organizational capabilities. We identified key theoretical components at the organizational, relational, and human levels. The data collection included 20 interviews with decision makers and data scientists from four analytical leaders. Early on we divided the companies into two categories based on their empirical characteristics, coining the terms "Born analyticals" and "Big intelligent firms". The analysis concluded that social, non-technical elements play a crucial role in building BDA abilities. These capabilities differ among companies but can still enable BDA in different ways, indicating that an organization's history and context seem to influence how firms deploy capabilities. Some capabilities have proven to be more important than others: the individual mindset towards data is seemingly the most decisive capability in building BDA ability. Varying mindsets foster different BDA environments in which other capabilities behave accordingly. Born analyticals seemed to display an environment benefiting evidence-based decisions.
|
297 |
Benefits, business considerations and risks of big data. Smeda, Jorina. 04 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: Big data is an emerging technology and its use holds great potential and benefits for
organisations. The governance of this technology remains a major concern, and an
aspect for which guidance to organisations wanting to use this technology is still
lacking.
In this study an extensive literature review was conducted to identify and define the
business imperatives distinctive of an organisation that will benefit from the use of
big data. The business imperatives were identified and defined based on the
characteristics and benefits of big data. If the characteristics and benefits are clear,
the relevant technology will be better understood. Furthermore, the business
imperatives provide business managers with guidance as to whether their organisation
will benefit from the use of this technology or not.
The strategic and operational risks related to the use of big data were also identified
and they are discussed in this assignment, based on a literature review. The risks
specific to big data are highlighted and guidance is given to business managers as to
which risks should be addressed when using big data. The risks are then mapped
against COBIT 5 (Control Objectives for Information and Related Technology) to
highlight the processes most affected when implementing and using big data,
providing business managers with guidance when governing this technology. / AFRIKAANSE OPSOMMING: ‘Big data’ is 'n ontwikkelende tegnologie en die gebruik daarvan hou baie groot
potensiaal en voordele vir besighede in. Die bestuur van hierdie tegnologie is egter ʼn
groot bron van kommer en leiding aan besighede wat hierdie tegnologie wil gebruik
ontbreek steeds.
Deur middel van 'n uitgebreide literatuuroorsig is die besigheidsimperatiewe
kenmerkend van 'n besigheid wat voordeel sal trek uit die gebruik van ‘big data’
geïdentifiseer. Die besigheidsimperatiewe is geïdentifiseer en gedefinieer gebaseer
op die eienskappe en voordele van ‘big data’. Indien die eienskappe en voordele
behoorlik verstaan word, is 'n beter begrip van die tegnologie moontlik.
Daarbenewens bied die besigheidsimperatiewe leiding aan bestuur sodat hulle in
staat kan wees om te beoordeel of hulle besigheid voordeel sal trek uit die gebruik
van hierdie tegnologie of nie.
Die strategiese en operasionele risiko's wat verband hou met die gebruik van ‘big
data’ is ook geïdentifiseer en bespreek, gebaseer op 'n literatuuroorsig. Dit
beklemtoon die risiko's verbonde aan ‘big data’ en daardeur word leiding verskaf aan
besigheidsbestuurders ten opsigte van watter risiko's aangespreek moet word
wanneer ‘big data’ gebruik word. Die risiko's is vervolgens gekarteer teen COBIT 5
(‘Control Objectives for Information and Related Technology’) om die prosesse wat
die meeste geraak word deur die gebruik van ‘big data’ te beklemtoon, ten einde
leiding te gee aan besigheidsbestuurders vir die beheer en kontrole van hierdie
tegnologie.
|
298 |
Social Media und Banken – Die Reaktionen von Facebook-Nutzern auf Kreditanalysen mit Social Media Daten / Social Media and Banks – Facebook Users' Reactions to Meta Data Based Credit Analysis. Thießen, Friedrich; Brenger, Jan Justus; Kühn, Annemarie; Gliem, Georg; Nake, Marianne; Neuber, Markus; Wulf, Daniel. 14 March 2017 (has links) (PDF)
Der Trend zur Auswertung aller nur möglichen Datenbestände für kommerzielle Zwecke ist eine nicht mehr aufzuhaltende Entwicklung. Auch für die Kreditwürdigkeitsprüfung wird überlegt, Daten aus Sozialen Netzwerken einzusetzen. Die Forschungsfrage entsteht, wie die Nutzer dieser Netzwerke reagieren, wenn Banken ihre privaten Profile durchsuchen. Mit Hilfe einer Befragung von 271 Probanden wurde dieses Problem erforscht. Die Ergebnisse sind wie folgt:
Die betroffenen Bürger sehen die Entwicklung mit Sorge. Sie begreifen ganz rational die neuen Geschäftsmodelle und ihre Logik und erkennen die Vorteile. Sie stehen dem Big-Data-Ansatz nicht vollkommen ablehnend gegenüber. Abgelehnt wird es aber, wenn sich Daten aus sozialen Medien negativ für eine Person auswirken. Wenn man schon sein Facebook-Profil einer Bank öffnet, dann will man einen Vorteil davon haben, keinen Nachteil. Ein Teil der Gesellschaft lehnt das Schnüffeln in privaten Daten strikt ab. Insgesamt sind die Antworten deutlich linksschief verteilt mit einem sehr dicken Ende im ablehnenden Bereich. Das Schnüffeln in privaten Daten wird als unethisch und unfair empfunden. Die Menschen fühlen sich im Gegenzug berechtigt, ihre Facebook-Daten zu manipulieren. Eine wie-Du-mir-so-ich-Dir-Mentalität ist festzustellen. Wer kommerziell ausgeschnüffelt wird, der antwortet kommerziell mit Manipulationen seiner Daten.
Insgesamt ist Banken zu raten, nicht Vorreiter der Entwicklung zu sein, sondern abzuwarten, welche Erfahrungen Fintechs machen. Banken haben zu hohe Opportunitätskosten in Form des Verlustes von Kundenvertrauen. / The trend to analyze all conceivable data sets for commercial purposes is unstoppable. Banks and fintechs try to use social media data to assess the creditworthiness of potential customers. The research question is how social media users react when they realize that their bank evaluates their personal social media profiles. A survey of 271 test persons was conducted to analyze this problem. The results are as follows:
The respondents are able to rationally reflect on the reasons for this development and the logic behind big data analyses. They recognize the advantages, but also see risks. Opening social media profiles to banks should not lead to individual disadvantages; instead, people expect an advantage from opening their profiles voluntarily. This is a moral attitude. An important minority of 20 to 30 % argues strictly against the commercial use of social media data. When people realize that they cannot prevent the commercial use of private data, they start to manipulate it. Manipulation becomes more extensive when test persons learn about critical details of big data analyses. Those who realize that their private data are used commercially consider it fair to answer in the same style, so the whole society moves in a commercial direction.
To sum up, banks should be reluctant and careful in analyzing private client big data. Instead, banks should leave the lead to fintechs, which have lower opportunity costs because they do not depend on good customer relations for related products.
|
299 |
User Adoption of Big Data Analytics in the Public Sector. Akintola, Abayomi Rasheed. January 2019 (has links)
The goal of this thesis was to investigate the factors that influence the adoption of big data analytics by public sector employees, based on an adapted Unified Theory of Acceptance and Use of Technology (UTAUT) model. A mixed method of surveys and interviews was used to collect data from employees of a Canadian provincial government ministry. The results show that performance expectancy and facilitating conditions have significant positive effects on the intention to adopt big data analytics, while effort expectancy has a significant negative effect. Social influence does not have a significant effect on adoption intention. In terms of moderating variables, the results show that gender moderates the effects of effort expectancy, social influence and facilitating conditions; data experience moderates the effects of performance expectancy, effort expectancy and facilitating conditions; and leadership moderates the effect of social influence. The moderating effects of age on performance expectancy and effort expectancy are significant only for employees in the 40 to 49 age group, while the moderating effect of age on social influence is significant for employees aged 40 and over. Based on the results, implications for public sector organizations planning to implement big data analytics are discussed and suggestions for further research are made. This research contributes to existing studies on the user adoption of big data analytics.
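Moderation effects of the kind reported above are commonly estimated as interaction terms in a regression on the survey scores. The following sketch uses synthetic, hypothetical data (not the thesis data) to show how a binary moderator such as gender enters the model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
effort = rng.uniform(1, 7, n)    # effort expectancy score (Likert-like)
gender = rng.integers(0, 2, n)   # binary moderator coded 0/1

# Synthetic "true" model: effort expectancy lowers adoption intention,
# and the negative effect is stronger when gender == 1.
intention = 4.0 - 0.3 * effort - 0.4 * effort * gender + rng.normal(0, 0.1, n)

# Design matrix with an interaction column; the moderation effect is the
# coefficient on effort * gender.
X = np.column_stack([np.ones(n), effort, gender, effort * gender])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(beta)  # ≈ [4.0, -0.3, 0.0, -0.4] up to sampling noise
```

A significant coefficient on the interaction column is what statements like "gender moderates the effect of effort expectancy" operationalize; real UTAUT studies would additionally report standard errors and fit statistics.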
|
300 |
Ballstering : un algorithme de clustering dédié à de grands échantillons / Ballstering : a clustering algorithm for large datasets. Courjault-Rade, Vincent. 17 April 2018 (has links)
Ballstering appartient à la famille des méthodes de machine learning qui ont pour but de regrouper en classes les éléments formant la base de données étudiée et ce sans connaissance au préalable des classes qu'elle contient. Ce type de méthodes, dont le représentant le plus connu est k-means, se rassemblent sous le terme de "partitionnement de données" ou "clustering". Récemment un algorithme de partitionnement "Fast Density Peak Clustering" (FDPC) paru dans le journal Science a suscité un intérêt certain au sein de la communauté scientifique pour son aspect innovant et son efficacité sur des données distribuées en groupes non-concentriques. Seulement cet algorithme présente une complexité telle qu'il ne peut être aisément appliqué à des données volumineuses. De plus nous avons pu identifier plusieurs faiblesses pouvant nuire très fortement à la qualité de ses résultats, dont en particulier la présence d'un paramètre général dc difficile à choisir et ayant malheureusement un impact non-négligeable. Compte tenu de ces limites, nous avons repris l'idée principale de FDPC sous un nouvel angle puis apporté successivement des modifications en vue d'améliorer ses points faibles. Modifications sur modifications ont finalement donné naissance à un algorithme bien distinct que nous avons nommé Ballstering. Le fruit de ces 3 années de thèse se résume principalement en la conception de ce dernier, un algorithme de partitionnement dérivé de FDPC spécialement conçu pour être efficient sur de grands volumes de données. Tout comme son précurseur, Ballstering fonctionne en deux phases: une phase d'estimation de densité suivie d'une phase de partitionnement. Son élaboration est principalement fondée sur la construction d'une sous-procédure permettant d'effectuer la première phase de FDPC avec une complexité nettement amoindrie tout en évitant le choix de dc qui devient dynamique, déterminé suivant la densité locale.
Nous appelons ICMDW cette sous-procédure qui représente une partie conséquente de nos contributions. Nous avons également remanié certaines des définitions au cœur de FDPC et revu entièrement la phase 2 en s'appuyant sur la structure arborescente des résultats fournis par ICMDW pour finalement produire un algorithme outrepassant toutes les limitations que nous avons identifiées chez FDPC. / Ballstering belongs to the family of machine learning methods that aim to group the objects of a dataset into classes, without any prior knowledge of the true classes within it. These methods, of which k-means is one of the most famous representatives, are known as clustering methods. Recently, a new clustering algorithm, "Fast Density Peak Clustering" (FDPC), aroused great interest in the scientific community for its innovative aspect and its effectiveness on non-concentric distributions. However, this algorithm has such complexity that it cannot easily be applied to large datasets. Moreover, we identified several weaknesses that strongly degrade the quality of its results, in particular a general parameter dc that is difficult to choose yet has a significant impact on the results. In view of these limitations, we revisited the main idea of FDPC and modified it step by step, finally producing a distinct algorithm that we named Ballstering. The work carried out during these three years can be summarised as the design of this algorithm, a derivative of FDPC especially conceived to be efficient on large datasets. Like its precursor, Ballstering works in two phases: a density estimation phase followed by a clustering phase.
Its design rests mainly on a sub-procedure that performs the first phase of FDPC with a much lower complexity while avoiding the difficult choice of dc, which becomes dynamic, determined according to the local density. We call this sub-procedure ICMDW; it represents a substantial part of our contributions. We also overhauled core definitions of FDPC and entirely reworked the second phase, relying on the tree structure of ICMDW's intermediate results, to finally produce an algorithm that overcomes all the limitations we identified in FDPC.
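The density-peak idea that Ballstering builds on (a local density computed within a cutoff dc, a distance to the nearest denser point, and centers combining high values of both) can be sketched as follows. This is a toy illustration of the original FDPC scheme under assumed details, not an implementation of Ballstering or ICMDW:

```python
import numpy as np

def density_peaks(points, dc, k=2):
    """Toy density-peak clustering in the spirit of FDPC.

    rho[i]   : local density = number of neighbours within cutoff dc
    delta[i] : distance to the nearest point of higher density
    Centers are the k points with the largest rho * delta; every other
    point inherits the label of its nearest denser neighbour.
    """
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    rho = (dist < dc).sum(axis=1) - 1          # exclude the point itself
    order = np.argsort(-rho)                   # decreasing density
    delta = np.full(n, dist.max())
    nearest_denser = np.full(n, -1)
    for rank, i in enumerate(order):
        for j in order[:rank]:                 # points ranked denser than i
            if dist[i, j] < delta[i]:
                delta[i] = dist[i, j]
                nearest_denser[i] = j
    centers = np.argsort(-(rho * delta))[:k]   # high rho AND high delta
    labels = np.full(n, -1)
    labels[centers] = np.arange(k)
    for i in order:                            # densest first, so each point's
        if labels[i] == -1:                    # denser neighbour is labeled
            labels[i] = labels[nearest_denser[i]]
    return labels

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
print(density_peaks(pts, dc=0.5))  # two clusters, one per blob
```

The abstract's two criticisms of FDPC are visible here: the full n-by-n distance matrix makes the cost quadratic in the sample size, and the result depends on the fixed cutoff dc, which, per the abstract, Ballstering replaces with a dynamic, locally determined value.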
|