  • 1. (2024 Grade 10 First Semester · Zhangzhou Final Exam)  Reading Comprehension

    "Hallucinate" — the Word of 2023

    Cambridge Dictionary has named "hallucinate" as the word of the year for 2023 — while giving it an added new meaning relating to AI (artificial intelligence) technology.

    The added Cambridge Dictionary definition (定义) reads: "When an AI hallucinates, it produces false information, which can vary (变化) from suggestions that seem perfectly believable to ones that are clearly nonsense."

    Wendalyn Nichols, Cambridge Dictionary's publishing manager, said: "The fact that AIs can 'hallucinate' reminds us that humans still need to bring their critical (批判的) thinking skills to the use of these tools. AIs can draw out specific information we need from huge amounts and piece it together. That's amazing. But they just stop there. The more original (原创的) you ask them to be, the likelier they are to go wrong."

    Actually, at their best, AIs can only be as dependable as their training information. Humans' professional knowledge is more important than ever, to create the truthful and up-to-date information that AIs can be trained on.

    AIs can hallucinate in a confident and thus more misleading manner. Their effects have been seen in real-world examples. In Google's advertisement for its chatbot Bard, the AI tool made an error about the James Webb Space Telescope. A US law company used cases made up by AIs in court after using ChatGPT for legal research.

    "The widespread use of the word 'hallucinate' to refer to mistakes by AIs offers us a quick look at how we're treating them as our equals," said Dr Henry, an AI ethicist at Cambridge University. "'Hallucinate' is originally a verb suggesting someone experiencing a disconnect from reality," he continued. "It mirrors an unnoticeable change in perception (认知): the AI, not the user, is the one 'hallucinating'." It seems that as time progresses, psychological vocabulary will be further enlarged to describe the strange abilities of the new intelligences we're creating.

    (1) What can be learned about the false information AIs produce?
      A. It doesn't make any sense to us.
      B. We didn't know about it until 2023.
      C. It happens because of AIs' limitation.
      D. We invented a new word to describe it.
    (2) What may be the key to training AIs?
      A. The information they're offered.
      B. The way they're trained in.
      C. The human experts they work with.
      D. The thinking skills they use.
    (3) Why does the author mention Google's advertisement in paragraph 5?
      A. To introduce a new topic.
      B. To prove an established fact.
      C. To compare the effects AIs produce.
      D. To list AIs' different applications.
    (4) What is the last paragraph mainly about?
      A. The proper way to treat AIs.
      B. People's understanding of AIs.
      C. Comments on the new use of "hallucinate".
      D. The change in the meaning of "hallucinate".
