
IME Questions | Answer Key and Solutions

Question 25
2022 Physics

(IME - 2022/2023 - 1st phase) The figure shows a setup with two vertical plane mirrors attached to blocks that oscillate along the same direction on a frictionless horizontal surface. A particle at rest is also present. Data: amplitude of each block's oscillation: A; mass of each assembly (block and mirror): m; spring constant of each spring: k. Notes: each mirror just touches the particle at rest with zero velocity, though at different instants; when the left mirror touches the particle at rest, the right mirror is returning, with its spring compressing, at its maximum speed. The maximum relative distance between the images of the particle in the mirrors and the maximum relative speed between them are, respectively:
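A minimal worked sketch of the standard approach, assuming each block undergoes simple harmonic motion and using the plane-mirror rule that the image sits as far behind the mirror as the object sits in front; the phase choice below is one reading of the statement, not the official solution:

\[ \omega = \sqrt{\tfrac{k}{m}}, \qquad v_{\max} = A\omega = A\sqrt{\tfrac{k}{m}} \]

With the particle between the mirrors, each image lies a distance 2d from the particle when its mirror is a distance d away, so the image separation is s = 2d_L + 2d_R. Taking the left mirror touching the particle (d_L = 0, v = 0) while the right one passes through equilibrium at maximum speed, d_L(t) = A(1 - cos ωt) and d_R(t) = A(1 + sin ωt), hence

\[ s(t) = 4A + 2\sqrt{2}\,A\,\sin\!\left(\omega t - \tfrac{\pi}{4}\right) \]

\[ s_{\max} = \left(4 + 2\sqrt{2}\right)A, \qquad \left|\tfrac{ds}{dt}\right|_{\max} = 2\sqrt{2}\,A\sqrt{\tfrac{k}{m}} \]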

Question 26
2022 English

(IME 2022/2023 - 2nd phase) IN QUESTIONS 21 TO 32, CHOOSE THE OPTION THAT CORRECTLY COMPLETES TEXT 1.

Text 1: XAI - Explainable artificial intelligence
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S. and Yang, G-Z.

Recent successes in machine learning (ML) have led to a new wave of artificial intelligence (AI) applications that offer extensive benefits to a ___(21)___ range of fields. However, many of these systems are not able to explain their ___(22)___ decisions and actions to human users. Explanations may not be essential for certain AI applications, and some AI researchers argue that the emphasis on explanation is misplaced, too difficult to achieve, and perhaps unnecessary. However, for many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners.

Recent AI successes are largely attributed to new ML techniques that construct models in their internal representations. These include support vector machines (SVMs), random forests, probabilistic graphical models, reinforcement learning (RL), and deep learning (DL) neural networks. Although these models exhibit high performance, they are opaque in terms of explainability. There may be inherent conflict between ML performance (e.g., predictive accuracy) and explainability. Often, the highest performing methods (e.g., DL) are the least explainable, and the most explainable (e.g., decision trees) are the least accurate.

The ___(23)___ of an explainable AI (XAI) system is to make its behavior more intelligible to humans by providing explanations. There are some general principles to help create effective, more human-understandable AI systems: the XAI system should be able to explain its capabilities and understandings; explain what it has done, what it is doing now, and what will happen next; and disclose the salient information that it is acting on. However, every explanation is set within a context that depends ___(24)___ the task, abilities, and expectations of the user of the AI system. The definitions of interpretability and explainability are, thus, domain dependent and may not be defined independently from a domain.

Explanations can be full or partial. Models that are fully interpretable give full and completely ___(25)___ explanations. Models that are partially interpretable reveal important pieces of their ___(26)___ process. Interpretable models obey interpretability constraints that are defined according to the domain, whereas black box or unconstrained models do not necessarily obey these constraints. Partial explanations may include variable importance measures, local models that approximate global models at specific points, and saliency maps.

XAI assumes that an explanation is ___(27)___ to an end user who depends on the decisions, recommendations, or actions produced by an AI system, yet there could be many different kinds of users, often ___(28)___ different time points in the development and use of the system. For example, a type of user might be an intelligence analyst, judge, or operator. However, other users who demand an explanation of the system might be a developer or test operator who needs to understand where there might be areas of improvement. Yet another user might be policy-makers, who are trying to ___(29)___ the fairness of the system. Each user group may have a preferred explanation type that is able to communicate information in the most effective way. An effective explanation will take the target user group of the system into account, who might vary in their background knowledge and needs for what should be explained.

A number of ways of evaluating and measuring the effectiveness of an explanation have been proposed; however, there is currently no common means of measuring if an XAI system is more intelligible to a user than a non-XAI system. Some of these measures are subjective measures from the user's point of view, such as user ___(30)___, which can be measured through a subjective rating of the clarity and utility of an explanation. More objective measures for an explanation's effectiveness might be task performance, i.e., does the explanation improve the user's decision-making. Reliable and consistent measurement of the effects of explanations is still an open research question. Evaluation and measurement for XAI systems include valuation frameworks, common ground, common sense, and argumentation. (...)

From a human-centered research perspective, research on competencies and knowledge could take XAI ___(31)___ the role of explaining a particular XAI system and helping its users to determine appropriate trust. In the future, XAIs may eventually have substantial social roles. These roles could include not only learning and explaining to individuals but also coordinating with other agents to connect knowledge, developing cross-disciplinary insights and common ground, partnering in teaching people and other agents, and drawing on previously discovered knowledge to accelerate the further discovery and application of knowledge. From such a social perspective of knowledge understanding and generation, the future ___(32)___ XAI is just beginning.

Adapted from: Science Robotics [Accessed on 15th April 2022].

Based on the text, choose the alternative that completes gap (26).

Question 26
2022 Physics

(IME - 2022/2023 - 1st phase) The figure above shows an apparatus with a vertical steel bar whose upper end is attached to the ceiling and whose lower end rests on the tip of an L-shaped seesaw. The seesaw, in turn, rests against an elastic support fixed to the indicated wall. After the apparatus is assembled, the steel bar is heated uniformly. Data: acceleration due to gravity: ; mass of the steel bar: ; length of the steel bar: ; linear expansion coefficient of the steel bar: ; elastic coefficient of the support: ; horizontal length of the seesaw: 2L; vertical length of the seesaw: ; temperature variation after heating: . Notes: the deformation of the steel bar after expansion is much smaller than ; the pin indicated in the figure remains fixed; before heating, the elastic support touches the seesaw and stores no potential energy. At the end of the heating process, the work done by the weight of the steel bar and the potential energy stored in the elastic support are, respectively:
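Because several data values were lost in extraction, only the generic chain of relations can be sketched; the bar length is written as L_b and the missing vertical arm of the seesaw as h (both placeholder symbols), and the small-rotation geometry below is an assumption, not the official solution:

\[ \Delta L = \alpha L_b\,\Delta T \]

With the top end fixed, the bar's center of mass descends by ΔL/2, so the work done by its weight is

\[ W = mg\,\frac{\Delta L}{2} = \frac{mg\,\alpha L_b\,\Delta T}{2} \]

If the bar's tip pushes the horizontal arm (length 2L) down by ΔL, the seesaw rotates about the pin by θ ≈ ΔL/(2L) and compresses the support on the vertical arm by x ≈ hθ, giving

\[ E_p = \frac{1}{2}\,k_a x^2 = \frac{1}{2}\,k_a\left(\frac{h\,\alpha L_b\,\Delta T}{2L}\right)^{2} \]

where k_a denotes the elastic coefficient of the support.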

Question 27
2022 English

(IME 2022/2023 - 2nd phase) IN QUESTIONS 21 TO 32, CHOOSE THE OPTION THAT CORRECTLY COMPLETES TEXT 1. Text 1, "XAI - Explainable artificial intelligence", is reproduced in full under Question 26 (English) above. Based on the text, choose the alternative that completes gap (27).

Question 27
2022 Physics

(IME - 2022/2023 - 1st phase) The circuit above is powered by a 12 V source. All resistance values shown are given in Ω. The power, in W, delivered by the source is:
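The circuit figure and the individual resistance values did not survive extraction, so only the governing relation can be sketched; R_eq below is a hypothetical equivalent resistance, not a value from the problem:

\[ P = V i = \frac{V^2}{R_{eq}}, \qquad V = 12\ \text{V} \]

For instance, if the network reduced to R_eq = 4 Ω, the source would deliver P = 144/4 = 36 W.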

Question 28
2022 English

(IME 2022/2023 - 2nd phase) IN QUESTIONS 21 TO 32, CHOOSE THE OPTION THAT CORRECTLY COMPLETES TEXT 1. Text 1, "XAI - Explainable artificial intelligence", is reproduced in full under Question 26 (English) above. Based on the text, choose the alternative that completes gap (28).

Question 28
2022 Physics

(IME - 2022/2023 - 1st phase) A light beam propagates horizontally and passes through a diffraction grating placed vertically. Notes: wavelength of the light: ; number of slits per centimeter of the grating: 4000. The values closest to the sines of the angles θ indicated in the figure, for θ ≠ 0, corresponding to the first two bright points projected on a vertical wall, are:
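A short sketch using the standard grating equation; the wavelength value was lost in extraction, so it stays symbolic, and the numeric line uses a hypothetical λ = 500 nm purely for illustration:

\[ d = \frac{10^{-2}\ \text{m}}{4000} = 2.5 \times 10^{-6}\ \text{m}, \qquad d\sin\theta_m = m\lambda \ \Rightarrow\ \sin\theta_m = \frac{m\lambda}{d} \]

With the hypothetical λ = 500 nm, the first two bright points would give sin θ₁ = 0.2 and sin θ₂ = 0.4.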

Question 29
2022 Physics

(IME - 2022/2023 - 1st phase) Three charged particles, initially at rest in the plane of the page, are positioned in a region of space subject to a uniform magnetic flux density pointing into the page. At the instant , the particle located at point 1 receives an impulse and follows the trajectory indicated by the dashed line in the figure, until a perfectly inelastic collision occurs with the particle located at point 2. Shortly afterward, another perfectly inelastic collision occurs with the particle located at position 3. Data: mass of each particle: m; charge of the particle initially at position 1: ; charge of the particle initially at position 2: ; charge of the particle initially at position 3: ; magnitude of the magnetic flux density: ; magnitude of the impulse: . Notes: there is no gravitational effect; the sign of is consistent with the geometry of the figure; all repulsive forces between the particles are negligible; the dashed trajectory in the figure is the union of three quarter-circle arcs. The total distance traveled by the impelled particle from position 1 to the point identified as final is:
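A sketch of the standard relations, kept symbolic because the charge, field, and impulse values were lost in extraction; q₁, q₂, q₃ name the charges at positions 1, 2, 3, J the impulse, and B the flux density:

\[ J = m v_1 \ \Rightarrow\ v_1 = \frac{J}{m}, \qquad r = \frac{Mv}{|q|B} \ \Rightarrow\ r_1 = \frac{m v_1}{|q_1| B} = \frac{J}{|q_1| B} \]

Each perfectly inelastic collision conserves momentum while merging masses and charges:

\[ v_2 = \frac{m v_1}{2m} = \frac{v_1}{2}, \quad r_2 = \frac{2m\,v_2}{|q_1+q_2|\,B} = \frac{m v_1}{|q_1+q_2|\,B}; \qquad v_3 = \frac{2m v_2}{3m} = \frac{v_1}{3}, \quad r_3 = \frac{m v_1}{|q_1+q_2+q_3|\,B} \]

Since the dashed path is three quarter-circles, the total distance is

\[ D = \frac{\pi}{2}\left(r_1 + r_2 + r_3\right) \]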

Question 29
2022 English

(IME 2022/2023 - 2nd phase) IN QUESTIONS 21 TO 32, CHOOSE THE OPTION THAT CORRECTLY COMPLETES TEXT 1. Text 1, "XAI - Explainable artificial intelligence", is reproduced in full under Question 26 (English) above. Based on the text, choose the alternative that completes gap (29).

Question 30
2022 English

(IME 2022/2023 - 2nd phase) IN QUESTIONS 21 TO 32, CHOOSE THE OPTION THAT CORRECTLY COMPLETES TEXT 1. Text 1, "XAI - Explainable artificial intelligence", is reproduced in full under Question 26 (English) above. Based on the text, choose the alternative that completes gap (30).

Question 30
2022 Physics

QUESTION ANNULLED!! The figure shows a spyglass with adjustable focus that has, at its end, a plano-convex lens of radius of curvature R. The refractive index of the lens varies with the wavelength of the incident light according to the expression: . The distance from the plane of the lens to the bottom of the spyglass is , with L of fixed value and which can be adjusted, so that the focal point of the lens always coincides with the bottom, that is, where the observer is. The spyglass must make it possible to focus light of wavelengths in the interval . Data: radius of curvature of the convex surface of the lens: R = 7 cm; constant A of the expression: 1.5; constant B of the expression: ; refractive index to the left and to the right of the lens: 1. Note: the lens is flat on the left and convex on the right. Knowing that can be zero, its largest possible value, in cm, is approximately: (A) 0.35 (B) 0.70 (C) 1.40 (D) 3.50 (E) 7.00 QUESTION ANNULLED!!
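The dispersion expression was lost in extraction; assuming the Cauchy-type law n(λ) = A + B/λ² that the given constants A and B suggest (an assumption, and the question was annulled in any case), a sketch of how the adjustable length would be bounded:

\[ \frac{1}{f} = \frac{n(\lambda) - 1}{R} \ \Rightarrow\ f(\lambda) = \frac{R}{n(\lambda) - 1} = \frac{7}{0.5 + B/\lambda^{2}}\ \text{cm} \]

The focus must reach the fixed bottom for every λ in the interval, so the adjustment range is

\[ \Delta L_{\max} = f_{\max} - f_{\min} = R\left(\frac{1}{n_{\min}-1} - \frac{1}{n_{\max}-1}\right) \]

whose numerical value depends on the missing constant B and on the missing wavelength interval.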

Question 31
2022 English

(IME 2022/2023 - 2nd phase) IN QUESTIONS 21 TO 32, CHOOSE THE OPTION THAT CORRECTLY COMPLETES TEXT 1. Text 1, "XAI - Explainable artificial intelligence", is reproduced in full under Question 26 (English) above. Based on the text, choose the alternative that completes gap (31).

Question 31
2022 Chemistry

(IME - 2022/2023 - 1st phase) In a voltaic cell, the standard Gibbs energy of reaction is determined by the expression ΔG° = -nFE°, in which n is a dimensionless number representing the number of moles of electrons transferred in the combined oxidation and reduction half-reactions, F is the Faraday constant, and E° is the standard cell potential. Consider the standard reduction potentials of iron and aluminum at 298 K indicated below. For a voltaic cell formed by the contact of two metals when an iron piece is fastened with aluminum bolts, at 298 K, evaluate the following assertions. I. The numerical value of n is 5. II. Over time, the fastened piece will fall due to corrosion of the iron. III. Over time, the fastened piece will fall due to corrosion of the aluminum. IV. The standard Gibbs energy of reaction of the cell is equal to 706 kJ/mol. V. In the voltaic cell formed, the oxidation of iron is a spontaneous process. Mark the option that presents ONLY the true statements.
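The tabulated potentials were lost in extraction; a sketch using the standard relation ΔG° = -nFE° and the textbook reduction potentials E°(Fe²⁺/Fe) = -0.44 V and E°(Al³⁺/Al) = -1.66 V, which are assumptions here rather than the document's own data:

\[ 2\,\text{Al} + 3\,\text{Fe}^{2+} \rightarrow 2\,\text{Al}^{3+} + 3\,\text{Fe}, \qquad n = 6 \]

\[ E^{\circ}_{cell} = E^{\circ}_{cathode} - E^{\circ}_{anode} = -0.44 - (-1.66) = 1.22\ \text{V} \]

\[ \Delta G^{\circ} = -nFE^{\circ} = -6 \times 96500 \times 1.22 \approx -7.06 \times 10^{5}\ \text{J/mol} \approx -706\ \text{kJ/mol} \]

Under these assumed values, ΔG° is negative, so aluminum oxidizes spontaneously and acts as a sacrificial anode protecting the iron piece.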

Question 32
2022 Chemistry

(IME - 2022/2023 - 1st phase) An ideal gas mixture of of oxygen and of acetylene is contained in a closed vessel of constant volume, at and . This mixture ignites and reacts completely, producing CO₂ and H₂O. Afterward, the medium is left to cool for some time, such that the products remain gaseous, and the measured final pressure is . It is correct to state that:
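The amounts, temperatures, and pressures were lost in extraction, so only the reasoning skeleton can be shown; the mole numbers in the example line are hypothetical:

\[ 2\,\text{C}_2\text{H}_2 + 5\,\text{O}_2 \rightarrow 4\,\text{CO}_2 + 2\,\text{H}_2\text{O} \]

At constant volume, the ideal-gas law links the initial and final states through

\[ \frac{P_1}{n_1 T_1} = \frac{P_2}{n_2 T_2} \]

For a hypothetical stoichiometric charge of 2 mol of C₂H₂ and 5 mol of O₂ (n₁ = 7 mol of gas becoming n₂ = 6 mol), the final pressure would be P₂ = P₁ (6 T₂)/(7 T₁).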

Question 32
2022 English

(IME 2022/2023 - 2nd phase) IN QUESTIONS 21 TO 32, CHOOSE THE OPTION THAT CORRECTLY COMPLETES TEXT 1. Text 1, "XAI - Explainable artificial intelligence", is reproduced in full under Question 26 (English) above. Based on the text, choose the alternative that completes gap (32).