(ITA - 2020 - 1 FASE) Experts warn that the substitution of machinery for human labour may render the population redundant. They worry that the discovery of this mighty power has come before we knew how to employ it rightly. Such fears are expressed today by those who worry that advances in artificial intelligence (AI) could destroy millions of jobs and pose a Terminator-style threat to humanity. But these are in fact the words of commentators discussing mechanisation and steam power two centuries ago. Back then the controversy over the dangers posed by machines was known as the machinery question. Now a very similar debate is under way. After many false dawns, AI has made extraordinary progress in the past few years, thanks to a versatile technique called deep learning. Given enough data, large (or deep) neural networks, modelled on the brain's architecture, can be trained to do all kinds of things. They power Google's search engine, Facebook's automatic photo tagging, Apple's voice assistant, Amazon's shopping recommendations and Tesla's self-driving cars. But this rapid progress has also led to concerns about safety and job losses. Stephen Hawking, Elon Musk and others wonder whether AI could get out of control, precipitating a sci-fi conflict between people and machines. Others worry that AI will cause widespread unemployment, by automating cognitive tasks that could previously be done only by people. After 200 years, the machinery question is back. It needs to be answered. Source: https://www.economist.com/leaders/2016/06/25/march-of-the-machines. Adapted. Accessed August 2019. Read the following statements to answer the question. I. Neural networks power Google's search engine, Apple's voice assistant, photo identification on Facebook, Amazon's shopping recommendations, and Tesla's self-driving cars. II. The fear that machines would replace human labour was real two hundred years ago, but it has been overcome today. III. Stephen Hawking and Elon Musk speculate whether AI could get out of control, leading people and machines into a conflict seen only in works of science fiction. IV. Two hundred years ago, the controversy over the dangers posed by machines was known as the machinery question.
(ITA - 2020 - 1 FASE) [Adapted] Read the text and mark the correct alternative. Experts warn that the substitution of machinery for human labour may render the population redundant. They worry that the discovery of this mighty power has come before we knew how to employ it rightly. Such fears are expressed today by those who worry that advances in artificial intelligence (AI) could destroy millions of jobs and pose a Terminator-style threat to humanity. But these are in fact the words of commentators discussing mechanisation and steam power two centuries ago. Back then the controversy over the dangers posed by machines was known as the machinery question. Now a very similar debate is under way. After many false dawns, AI has made extraordinary progress in the past few years, thanks to a versatile technique called deep learning. Given enough data, large (or deep) neural networks, modelled on the brain's architecture, can be trained to do all kinds of things. They power Google's search engine, Facebook's automatic photo tagging, Apple's voice assistant, Amazon's shopping recommendations and Tesla's self-driving cars. But this rapid progress has also led to concerns about safety and job losses. Stephen Hawking, Elon Musk and others wonder whether AI could get out of control, precipitating a sci-fi conflict between people and machines. Others worry that AI will cause widespread unemployment, by automating cognitive tasks that could previously be done only by people. After 200 years, the machinery question is back. It needs to be answered. Source: https://www.economist.com/leaders/2016/06/25/march-of-the-machines. Adapted. Accessed August 2019. The underlined word in the excerpts taken from the text can be replaced, without change of meaning, by the word or expression in the second column, except in
(ITA - 2020 - 1 FASE) If there is any doubt about the persistent power of literature in the face of digital culture, it should be banished by the recent climb of George Orwell's 1984 up the Amazon Movers and Shakers list. There is much that's resonant for us in Orwell's dystopia in the face of Edward Snowden's revelations about the NSA [...]. We look to 1984 as a clear cautionary tale, even a prophecy, of systematic abuse of power taken to the end of the line. [...] However, after THE END of his dystopian novel 1984, George Orwell includes another chapter, an appendix, called The Principles of Newspeak. Since it has the trappings of a tedious scholarly treatise, readers often skip the appendix. But it changes our whole understanding of the novel. Written from some unspecified point in the future, it suggests that Big Brother was eventually defeated. The victory is attributed not to individual rebels or to The Brotherhood, an anonymous resistance group, but rather to language itself. The appendix details Oceania's attempt to replace Oldspeak, or English, with Newspeak, a linguistic shorthand that reduces the world of ideas to a set of simple, stark words. The whole aim of Newspeak is to narrow the range of thought. It will render dissent literally impossible, because there will be no words in which to express it. Source: Frost, Laura. http://qz.com/95696. Adapted. Accessed August 2019. According to the text, readers of George Orwell's classic 1984 generally skip the book's appendix because
(ITA - 2020 - 1 FASE) Read the text and mark the correct alternative. If there is any doubt about the persistent power of literature in the face of digital culture, it should be banished by the recent climb of George Orwell's 1984 up the Amazon Movers and Shakers list. There is much that's resonant for us in Orwell's dystopia in the face of Edward Snowden's revelations about the NSA [...]. We look to 1984 as a clear cautionary tale, even a prophecy, of systematic abuse of power taken to the end of the line. [...] However, after THE END of his dystopian novel 1984, George Orwell includes another chapter, an appendix, called The Principles of Newspeak. Since it has the trappings of a tedious scholarly treatise, readers often skip the appendix. But it changes our whole understanding of the novel. Written from some unspecified point in the future, it suggests that Big Brother was eventually defeated. The victory is attributed not to individual rebels or to The Brotherhood, an anonymous resistance group, but rather to language itself. The appendix details Oceania's attempt to replace Oldspeak, or English, with Newspeak, a linguistic shorthand that reduces the world of ideas to a set of simple, stark words. The whole aim of Newspeak is to narrow the range of thought. It will render dissent literally impossible, because there will be no words in which to express it. Source: Frost, Laura. http://qz.com/95696. Adapted. Accessed August 2019. In the excerpt "but rather to language itself", the term "rather" can be replaced, without change of meaning, by
(ITA - 2020 - 1 FASE) Consider Text II. According to the text, it is incorrect to state that
(ITA - 2020 - 1 FASE) Of course they're fake videos, everyone can see they're not real. All the same, they really did say those things, didn't they? These are the words of Vivienne Rook, the fictional politician played by Emma Thompson in the brilliant dystopian BBC TV drama Years and Years. The episode in question, set in 2027, tackles the subject of deepfakes - videos in which a living person's face and voice are digitally manipulated to say anything the programmer wants. Rook perfectly sums up the problem with these videos - even if you know they are fake, they leave a lingering impression. And her words are all the more compelling because deepfakes are real and among us already. Last year, several deepfake porn videos emerged online, appearing to show celebrities such as Emma Watson, Gal Gadot and Taylor Swift in explicit situations. [...] In some cases, the deepfakes are almost indistinguishable from the real thing - which is particularly worrying for politicians and other people in the public eye. Videos that may initially have been created for laughs could easily be misinterpreted by viewers. Earlier this year, for example, a digitally altered video appeared to show Nancy Pelosi, the speaker of the US House of Representatives, slurring drunkenly through a speech. The video was widely shared on Facebook and YouTube, before being tweeted by President Donald Trump with the caption: PELOSI STAMMERS THROUGH NEWS CONFERENCE. The video was debunked, but not before it had been viewed millions of times. Trump has still not deleted the tweet, which has been retweeted over 30,000 times. The current approach of social media companies is to filter out and reduce the distribution of deepfake videos, rather than outright removing them - unless they are pornographic. This can result in victims suffering severe reputational damage, not to mention ongoing humiliation and ridicule from viewers.
Deepfakes are one of the most alarming trends I have witnessed as a Congresswoman to date, said US Congresswoman Yvette Clarke in a recent article for Quartz. If the American public can be made to believe and trust altered videos of presidential candidates, our democracy is in grave danger. We need to work together to stop deepfakes from becoming the defining feature of the 2020 elections. Of course, it's not just democracy that is at risk, but also the economy, the legal system and even individuals themselves. Clarke warns that, if deepfake technology continues to evolve without a check, video evidence could lose its credibility during trials. It is not hard to imagine it being used by disgruntled ex-lovers, employees and random people on the internet to exact revenge and ruin people's reputations. The software for creating these videos is already widely available. Source: Curtis, Sophie. https://www.mirror.co.uk/tech/deepfake-videos-creepy-new-internet-18289900. Adapted. Accessed August 2019. According to the text, it is correct to state that
(ITA - 2020 - 1 FASE) Read the text and mark the correct alternative. Of course they're fake videos, everyone can see they're not real. All the same, they really did say those things, didn't they? These are the words of Vivienne Rook, the fictional politician played by Emma Thompson in the brilliant dystopian BBC TV drama Years and Years. The episode in question, set in 2027, tackles the subject of deepfakes - videos in which a living person's face and voice are digitally manipulated to say anything the programmer wants. Rook perfectly sums up the problem with these videos - even if you know they are fake, they leave a lingering impression. And her words are all the more compelling because deepfakes are real and among us already. Last year, several deepfake porn videos emerged online, appearing to show celebrities such as Emma Watson, Gal Gadot and Taylor Swift in explicit situations. [...] In some cases, the deepfakes are almost indistinguishable from the real thing - which is particularly worrying for politicians and other people in the public eye. Videos that may initially have been created for laughs could easily be misinterpreted by viewers. Earlier this year, for example, a digitally altered video appeared to show Nancy Pelosi, the speaker of the US House of Representatives, slurring drunkenly through a speech. The video was widely shared on Facebook and YouTube, before being tweeted by President Donald Trump with the caption: PELOSI STAMMERS THROUGH NEWS CONFERENCE. The video was debunked, but not before it had been viewed millions of times. Trump has still not deleted the tweet, which has been retweeted over 30,000 times. The current approach of social media companies is to filter out and reduce the distribution of deepfake videos, rather than outright removing them - unless they are pornographic. This can result in victims suffering severe reputational damage, not to mention ongoing humiliation and ridicule from viewers.
Deepfakes are one of the most alarming trends I have witnessed as a Congresswoman to date, said US Congresswoman Yvette Clarke in a recent article for Quartz. If the American public can be made to believe and trust altered videos of presidential candidates, our democracy is in grave danger. We need to work together to stop deepfakes from becoming the defining feature of the 2020 elections. Of course, it's not just democracy that is at risk, but also the economy, the legal system and even individuals themselves. Clarke warns that, if deepfake technology continues to evolve without a check, video evidence could lose its credibility during trials. It is not hard to imagine it being used by disgruntled ex-lovers, employees and random people on the internet to exact revenge and ruin people's reputations. The software for creating these videos is already widely available. Source: Curtis, Sophie. https://www.mirror.co.uk/tech/deepfake-videos-creepy-new-internet-18289900. Adapted. Accessed August 2019. In the excerpt "it's not just democracy that is at risk, but also the economy", the underlined expression conveys an idea of
(ITA - 2020 - 1 FASE) Read the text and mark the correct alternative. Of course they're fake videos, everyone can see they're not real. All the same, they really did say those things, didn't they? These are the words of Vivienne Rook, the fictional politician played by Emma Thompson in the brilliant dystopian BBC TV drama Years and Years. The episode in question, set in 2027, tackles the subject of deepfakes - videos in which a living person's face and voice are digitally manipulated to say anything the programmer wants. Rook perfectly sums up the problem with these videos - even if you know they are fake, they leave a lingering impression. And her words are all the more compelling because deepfakes are real and among us already. Last year, several deepfake porn videos emerged online, appearing to show celebrities such as Emma Watson, Gal Gadot and Taylor Swift in explicit situations. [...] In some cases, the deepfakes are almost indistinguishable from the real thing - which is particularly worrying for politicians and other people in the public eye. Videos that may initially have been created for laughs could easily be misinterpreted by viewers. Earlier this year, for example, a digitally altered video appeared to show Nancy Pelosi, the speaker of the US House of Representatives, slurring drunkenly through a speech. The video was widely shared on Facebook and YouTube, before being tweeted by President Donald Trump with the caption: PELOSI STAMMERS THROUGH NEWS CONFERENCE. The video was debunked, but not before it had been viewed millions of times. Trump has still not deleted the tweet, which has been retweeted over 30,000 times. The current approach of social media companies is to filter out and reduce the distribution of deepfake videos, rather than outright removing them - unless they are pornographic. This can result in victims suffering severe reputational damage, not to mention ongoing humiliation and ridicule from viewers.
Deepfakes are one of the most alarming trends I have witnessed as a Congresswoman to date, said US Congresswoman Yvette Clarke in a recent article for Quartz. If the American public can be made to believe and trust altered videos of presidential candidates, our democracy is in grave danger. We need to work together to stop deepfakes from becoming the defining feature of the 2020 elections. Of course, it's not just democracy that is at risk, but also the economy, the legal system and even individuals themselves. Clarke warns that, if deepfake technology continues to evolve without a check, video evidence could lose its credibility during trials. It is not hard to imagine it being used by disgruntled ex-lovers, employees and random people on the internet to exact revenge and ruin people's reputations. The software for creating these videos is already widely available. Source: Curtis, Sophie. https://www.mirror.co.uk/tech/deepfake-videos-creepy-new-internet-18289900. Adapted. Accessed August 2019. According to Congresswoman Yvette Clarke, given the various risks posed by deepfake videos, it is necessary to
(ITA - 2020 - 1 FASE) About seven years ago, three researchers at the University of Toronto built a system that could analyze thousands of photos and teach itself to recognize everyday objects, like dogs, cars and flowers. The system was so effective that Google bought the tiny start-up these researchers were only just getting off the ground. And soon, their system sparked a technological revolution. Suddenly, machines could see in a way that was not possible in the past. This made it easier for a smartphone app to search your personal photos and find the images you were looking for. It accelerated the progress of driverless cars and other robotics. And it improved the accuracy of facial recognition services, for social networks like Facebook and for the country's law enforcement agencies. But soon, researchers noticed that these facial recognition services were less accurate when used with women and people of color. Activists raised concerns over how companies were collecting the huge amounts of data needed to train these kinds of systems. Others worried these systems would eventually lead to mass surveillance or autonomous weapons. Source: Metz, Cade. Seeking Ground Rules for A.I. www.nytimes.com, 01/03/2019. Adapted. Accessed August 2019. According to the information in the text, select the alternative that best completes the statement: The new system proved to be less precise when
(ITA - 2020 - 1 FASE) About seven years ago, three researchers at the University of Toronto built a system that could analyze thousands of photos and teach itself to recognize everyday objects, like dogs, cars and flowers. The system was so effective that Google bought the tiny start-up these researchers were only just getting off the ground. And soon, their system sparked a technological revolution. Suddenly, machines could see in a way that was not possible in the past. This made it easier for a smartphone app to search your personal photos and find the images you were looking for. It accelerated the progress of driverless cars and other robotics. And it improved the accuracy of facial recognition services, for social networks like Facebook and for the country's law enforcement agencies. But soon, researchers noticed that these facial recognition services were less accurate when used with women and people of color. Activists raised concerns over how companies were collecting the huge amounts of data needed to train these kinds of systems. Others worried these systems would eventually lead to mass surveillance or autonomous weapons. Source: Metz, Cade. Seeking Ground Rules for A.I. www.nytimes.com, 01/03/2019. Adapted. Accessed August 2019. Consider statements I to IV below. I. Activists expressed concern about the way companies were collecting huge amounts of data to train recognition systems. II. The University of Toronto built an ethical Artificial Intelligence system for image recognition. III. One of the activists' concerns was the possibility of such systems leading to mass surveillance or autonomous weapons. IV. Private technology companies, such as Google, and digital networks, such as Facebook, along with some government agencies, reached a consensus on an ethics of Artificial Intelligence. V.
Some laws were developed by certain specific groups of people to decide on the future of Artificial Intelligence. According to the text, only the following statements are correct: