Text 2: The climate is changing; the thing is, it isn't just due to humans
By Tonya T. Neaves

Natural forces beyond human control are also gradually affecting our climate. These geophysical forces are vital to understanding global warming. Man is indeed responsible for a large portion, possibly even a majority, of global warming. But also in play are complex gravitational interactions, including changes in the Earth's orbit, axial tilt and torque. This fact needs to be included in the public debate, because these gravitational shifts, occurring over millennia, can influence climate patterns and ultimately lead to noticeable variations in seasons. Interestingly, research suggests climate change can alter the tilt of the Earth, but an unrelated change in tilt can also further change the climate. It is a balance-counterbalance relationship. Changes in seasons can also affect other types of storms, including severe winter snowstorms and tornadoes.

The variations in the Earth's orbit are known as the Milankovitch cycles, after the Serbian geophysicist Milutin Milanković, who hypothesized this phenomenon in the 1920s. He discovered that variations in the Earth's path around the Sun, axial tilt and torque could together affect our climate. Even a slight change of orientation in the precession of the Earth's rotating body can cause a wobbling effect, shifting torque in different areas, since the planet is not a perfect sphere, to some people's surprise.

Now would seem a particularly apt time to act. The 2017 Atlantic hurricane season was an intense, record-setting period, with several landfall hurricanes barreling their way through the Caribbean and Gulf of Mexico, devastating parts of the Leeward Islands and United States. Still, even President Donald J. Trump has implied the whole idea of climate change may just be a hoax. Most Republicans seem to agree that it is not a serious problem. Meanwhile, some Democrats have tried to use the frequency and intensity of storms in the hopes of highlighting the climate change conversation, but even this effort has seemed muted. The heightened culture of disaster only feeds our attention on political banter and ideological semantics, with no room for informed decision-making.

Though climate change is inevitable, we also need to have a healthy appreciation of the fact that climate shifts aren't just limited to rapidly changing weather patterns. Turning the corner into unexplored territory is always difficult. By having a broader sense of communal resiliency (social, political and economic standing) we can manage this unavoidable pendulum of climate extremes.

Adapted from: THINK - Opinion, Analysis, Essays, in: https://www.nbcnews.com/think/opinion/climate-changingnot-just-because-humans-here-s-why-matters-ncna824271 [Accessed on March 10th, 2023].

Choose the wrong option according to Text 2:
Text 1: Why we should eat less meat

Eating less meat is good for the planet, good ___(21)___ animals, and good for ___(22)___ health. It's not all or ___(23)___. While adopting a vegan or vegetarian diet is commendable, it ___(24)___ not be possible for everyone. The current demand for animal protein (meat, dairy, and eggs) is unsustainable. It enforces the ___(25)___ for intensive farming methods, ___(26)___ animals are inhumanely caged in small spaces that restrict their natural behaviours, selectively bred to grow ___(27)___ and pumped full of antibiotics unnecessarily. Of the 70+ billion animals farmed globally ___(28)___ year, an estimated 50 billion of them will spend their entire lives on these factory farms where ___(29)___ like commodities. A substantial reduction in animal protein consumption across the globe ___(30)___ free up land and other resources, making room for higher welfare, more sustainable production systems that would benefit both the animals and humans. Factory farming fuels climate change, pollutes landscapes and waterways, and wastefully uses precious resources. Animal agriculture is estimated to account for nearly 15% of global greenhouse gas emissions, ___(31)___ more than all the cars, planes, and other forms of transport put together. Decreasing the demand for animal products will help conserve water, save vital habitats, reduce greenhouse gas emissions and help conserve our Earth. Industrial animal agriculture is the ___(32)___ largest driver of habitat loss and deforestation worldwide.

Adapted from: World Animal Protection, in: https://www.worldanimalprotection.ca/news/why-we-should-eat-lessmeat [Accessed on March 06th, 2023].
(IME 2022/2023 - 2ª fase) IN QUESTIONS 21 TO 32, CHOOSE THE OPTION THAT CORRECTLY COMPLETES TEXT 1.

Text 1: XAI - Explainable artificial intelligence
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S. and Yang, G.-Z.

Recent successes in machine learning (ML) have led to a new wave of artificial intelligence (AI) applications that offer extensive benefits to a ___(21)___ range of fields. However, many of these systems are not able to explain their ___(22)___ decisions and actions to human users. Explanations may not be essential for certain AI applications, and some AI researchers argue that the emphasis on explanation is misplaced, too difficult to achieve, and perhaps unnecessary. However, for many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners.

Recent AI successes are largely attributed to new ML techniques that construct models in their internal representations. These include support vector machines (SVMs), random forests, probabilistic graphical models, reinforcement learning (RL), and deep learning (DL) neural networks. Although these models exhibit high performance, they are opaque in terms of explainability. There may be inherent conflict between ML performance (e.g., predictive accuracy) and explainability. Often, the highest performing methods (e.g., DL) are the least explainable, and the most explainable (e.g., decision trees) are the least accurate.

The ___(23)___ of an explainable AI (XAI) system is to make its behavior more intelligible to humans by providing explanations. There are some general principles to help create effective, more human-understandable AI systems: the XAI system should be able to explain its capabilities and understandings; explain what it has done, what it is doing now, and what will happen next; and disclose the salient information that it is acting on. However, every explanation is set within a context that depends ___(24)___ the task, abilities, and expectations of the user of the AI system. The definitions of interpretability and explainability are, thus, domain dependent and may not be defined independently from a domain.

Explanations can be full or partial. Models that are fully interpretable give full and completely ___(25)___ explanations. Models that are partially interpretable reveal important pieces of their ___(26)___ process. Interpretable models obey interpretability constraints that are defined according to the domain, whereas black box or unconstrained models do not necessarily obey these constraints. Partial explanations may include variable importance measures, local models that approximate global models at specific points, and saliency maps.

XAI assumes that an explanation is ___(27)___ to an end user who depends on the decisions, recommendations, or actions produced by an AI system, yet there could be many different kinds of users, often ___(28)___ different time points in the development and use of the system. For example, a type of user might be an intelligence analyst, judge or an operator. However, other users who demand an explanation of the system might be a developer or test operator who needs to understand where there might be areas of improvement. Yet another user might be policy-makers, who are trying to ___(29)___ the fairness of the system. Each user group may have a preferred explanation type that is able to communicate information in the most effective way. An effective explanation will take the target user group of the system into account, who might vary in their background knowledge and needs for what should be explained.

A number of ways of evaluating and measuring the effectiveness of an explanation have been proposed; however, there is currently no common means of measuring if an XAI system is more intelligible to a user than a non-XAI system. Some of these measures are subjective measures from the user's point of view, such as user ___(30)___, which can be measured through a subjective rating of the clarity and utility of an explanation. More objective measures for an explanation's effectiveness might be task performance, i.e., does the explanation improve the user's decision-making? Reliable and consistent measurement of the effects of explanations is still an open research question. Evaluation and measurement for XAI systems include valuation frameworks, common ground, common sense, and argumentation. (...)

From a human-centered research perspective, research on competencies and knowledge could take XAI ___(31)___ the role of explaining a particular XAI system and helping its users to determine appropriate trust. In the future, XAIs may eventually have substantial social roles. These roles could include not only learning and explaining to individuals but also coordinating with other agents to connect knowledge, developing cross-disciplinary insights and common ground, partnering in teaching people and other agents, and drawing on previously discovered knowledge to accelerate the further discovery and application of knowledge. From such a social perspective of knowledge understanding and generation, the future ___(32)___ XAI is just beginning.

Adapted from: Science Robotics [Accessed on 15th April 2022].

Consider the text above and choose the alternative that completes blank (21).
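For readers less familiar with the "partial explanations" the passage lists (variable importance measures, local models that approximate global models at specific points, saliency maps), the second idea can be made concrete with a short sketch. The following Python example, which is illustrative only and not part of the exam text or the cited article, fits a weighted linear surrogate to a black-box classifier around one input, in the spirit of LIME-style local approximation; the dataset, model choices, and kernel width are all assumptions made for the demonstration.

```python
# A minimal sketch (not from the exam text) of a local linear model that
# approximates a black-box model around a single point. All data and
# parameter choices below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Black box: a high-performing but opaque model (cf. the accuracy vs.
# explainability trade-off described in the passage).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Point whose prediction we want to explain.
x0 = X[0]

# Perturb around x0, query the black box, and weight samples by proximity.
Z = x0 + rng.normal(scale=0.3, size=(1000, X.shape[1]))
probs = black_box.predict_proba(Z)[:, 1]
weights = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)

# Weighted linear fit: an interpretable local approximation of the global model.
surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)

# Coefficients act as a variable-importance measure valid near x0.
for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: local effect {coef:+.3f}")
```

Note that the surrogate's coefficients give only a partial explanation in the passage's sense: they describe the black box's behavior near x0, not globally.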
(IME 2022/2023 - 2ª fase) Consider Text 1 above and choose the alternative that completes blank (22).
(IME 2022/2023 - 2ª fase) Consider Text 1 above and choose the alternative that completes blank (23).
(IME 2022/2023 - 2ª fase) Consider Text 1 above and choose the alternative that completes blank (24).
(IME 2022/2023 - 2ª fase) Consider Text 1 above and choose the alternative that completes blank (25).
(IME 2022/2023 - 2ª fase) Consider Text 1 above and choose the alternative that completes blank (26).
(IME 2022/2023 - 2ª fase) Consider Text 1 above and choose the alternative that completes blank (27).
(IME 2022/2023 - 2ª fase) Consider Text 1 above and choose the alternative that completes blank (28).
(IME 2022/2023 - 2ª fase) Consider Text 1 above and choose the alternative that completes blank (29).
(IME 2022/2023 - 2 fase) NAS QUESTES DE 21 A 32, ESCOLHA A OPO QUE COMPLETA O TEXTO 1 CORRETAMENTE. Text 1 XAI-Explainable artificial intelligence Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S. e Yang, G-Z Recent successes in machine learning (ML) have led to a new wave of artificial intelligence (AI) applications that offer extensive benefits to a ___(21)___ range of fields. However, many of these systems are not able to explain their ___(22)___ decisions and actions to human users. Explanations may not be essential for certain AI applications, and some AI researchers argue that the emphasis on explanation is misplaced, too difficult to achieve, and perhaps unnecessary. However, for many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners. Recent AI successes are largely attributed to new ML techniques that construct models in their internal representations. These include support vector machines (SVMs), random forests, probabilistic graphical models, reinforcement learning (RL), and deep learning (DL) neural networks. Although these models exhibit high performance, they are opaque in terms of explainability. There may be inherent conflict between ML performance (e.g., predictive accuracy) and explainability. Often, the highest performing methods (e.g., DL) are the least explainable, and the most explainable (e.g., decision trees) are the least accurate. The ___(23)___ of an explainable AI (XAI) system is to make its behavior more intelligible to humans by providing explanations. There are some general principles to help create effective, more human-understandable AI systems: The XAI system should be able to explain its capabilities and understandings; explain what it has done, what it is doing now, and what will happen next; and disclose the salient information that it is acting on. However, every explanation is set within a context that depends ___(24)___ the task, abilities, and expectations of the user of the AI system. The definitions of interpretability and explainability are, thus, domain dependent and may not be defined independently from a domain. Explanations can be full or partial. Models that are fully interpretable give full and completely ___(25)___ explanations. Models that are partially interpretable reveal important pieces of their ___(26)___ process. Interpretable models obey interpretability constraints that are defined according to the domain, whereas black box or unconstrained models do not necessarily obey these constraints. Partial explanations may include variable importance measures, local models that approximate global models at specific points and saliency maps. XAI assumes that an explanation is ___(27)___ to an end user who depends on the decisions, recommendations, or actions produced by an AI system yet there could be many different kinds of users, often ___(28)___ different time points in the development and use of the system. For example, a type of user might be an intelligence analyst, judge or an operator. However, other users who demand an explanation of the system might be a developer or test operator who needs to understand where there might be areas of improvements. Yet another user might be policy-makers, who are trying to ___(29)___ the fairness of the system. Each user group may have a preferred explanation type that is able to communicate information in the most effective way. 
Consider the text above and choose the alternative that completes gap (30).
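To make concrete the passage's mention of "local models that approximate global models at specific points" as a form of partial explanation, here is a minimal sketch of that idea in Python, assuming scikit-learn is available. It is an illustration of the general technique, not the authors' method: an opaque model is queried on perturbations around one instance, and an interpretable linear model is fitted to those answers so that its coefficients act as a local explanation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# An opaque "global" model (stands in for any black box).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The specific point we want an explanation for.
x0 = X[0]

# Query the black box on small perturbations around x0.
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.5, size=(200, x0.size))
probs = black_box.predict_proba(Z)[:, 1]

# Interpretable local model: a linear fit to the black box's behavior near x0.
# Its coefficients are a partial explanation of the global model at this point.
local = Ridge(alpha=1.0).fit(Z, probs)
print("local feature weights near x0:", local.coef_.round(3))
```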
Consider the text above and choose the alternative that completes gap (31).
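Finally, the passage's claim that the most explainable models (e.g., decision trees) tend to be less accurate than opaque, high-performing ones can also be sketched directly. The snippet below, again assuming scikit-learn and using a synthetic dataset, compares the two kinds of model and prints the tree's human-readable rules as the kind of full explanation the text describes; the exact accuracy gap will vary with the data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for a real task.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Interpretable model: a shallow decision tree whose rules can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_tr, y_tr)
# Opaque model: an ensemble that is typically more accurate but harder to explain.
forest = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)

print("tree accuracy:  ", tree.score(X_te, y_te))
print("forest accuracy:", forest.score(X_te, y_te))
print(export_text(tree))  # the tree's decision rules: a fully interpretable explanation
```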