As Artificial Intelligence (AI) becomes increasingly sophisticated, there are growing concerns that robots could become a threat. This danger can be avoided, according to computer science professor Stuart Russell, if we figure out how to turn human values into a programmable code.
Russell argues that as robots take on more complicated tasks, it's necessary to translate our morals into AI language.
For example, if a robot does chores around the house, you wouldn't want it to put the pet cat in the oven to make dinner for the hungry children. "You would want that robot preloaded with a good set of values," said Russell.
Some robots are already programmed with basic human values. For example, mobile robots have been programmed to keep a comfortable distance from humans. Obviously there are cultural differences, but if you were talking to another person and they came up close in your personal space, you wouldn't think that's the kind of thing a properly brought-up person would do.
It will be possible to create more sophisticated moral machines, if only we can find a way to set out human values as clear rules.
Robots could also learn values by drawing patterns from large sets of data on human behavior. They are dangerous only if programmers are careless.
The biggest concern with robots going against human values is that human beings fail to do sufficient testing and produce a system that will break some kind of taboo.
One simple check would be to program a robot to confirm the correct course of action with a human when presented with an unusual situation.
If the robot is unsure whether an animal is suitable for the microwave, it has the opportunity to stop, send out beeps, and ask for directions from a human. If we humans aren't quite sure about a decision, we go and ask somebody else.
The most difficult step in programming values will be deciding exactly what we believe is moral, and how to create a set of ethical rules. But if we come up with an answer, robots could be good for humanity.
1.What does the author say about the threat of robots?
A. It may be a challenge to computer programmers.
B. It accompanies all machinery involving high technology.
C. It can be avoided if human values are translated into their language.
D. It has become an inevitable danger as technology gets more sophisticated.
2.How do robots learn human values?
A. By interacting with humans in everyday life situations.
B. By following the daily routines of civilized human beings.
C. By picking up patterns from massive data on human behavior.
D. By imitating the behavior of properly brought-up human beings.
3.What will a well-programmed robot do when facing an unusual situation?
A. Keep a distance from possible dangers.
B. Do sufficient testing before taking action.
C. Set off its built-in alarm system at once.
D. Stop to seek advice from a human being.
4.What is most difficult to do when we turn human values into a programmable code?
A. Determine what is moral and ethical.
B. Design some large-scale experiments.
C. Set rules for man-machine interaction.
D. Develop a more sophisticated program.
Senior 3 English reading comprehension (difficult)
As Artificial Intelligence (AI) becomes increasingly sophisticated, there are growing concerns that robots could become a threat. This danger can be avoided, according to computer science professor Stuart Russell, if we figure out how to turn human values into a programmable code.
Russell argues that as robots take on more complicated tasks, it's necessary to translate our morals into AI language.
For example, if a robot does chores around the house, you wouldn't want it to put the pet cat in the oven to make dinner for the hungry children. “You would want that robot preloaded with a good set of values,” said Russell.
Some robots are already programmed with basic human values. For example, mobile robots have been programmed to keep a comfortable distance from humans. Obviously there are cultural differences, but if you were talking to another person and they came up close in your personal space, you wouldn't think that's the kind of thing a properly brought-up person would do.
It will be possible to create more sophisticated moral machines, if only we can find a way to set out human values as clear rules.
Robots could also learn values from drawing patterns from large sets of data on human behavior. They are dangerous only if programmers are careless.
The biggest concern with robots going against human values is that human beings fail to do sufficient testing and produce a system that will break some kind of taboo.
One simple check would be to program a robot to check the correct course of action with a human when presented with an unusual situation.
If the robot is unsure whether an animal is suitable for the microwave, it has the opportunity to stop, send out beeps, and ask for directions from a human. If we humans aren't quite sure about a decision, we go and ask somebody else.
The most difficult step in programming values will be deciding exactly what we believe is moral, and how to create a set of ethical rules. But if we come up with an answer, robots could be good for humanity.
1.What does the author say about the threat of robots?
A.It may constitute a challenge to computer programmers.
B.It accompanies all machinery involving high technology.
C.It can be avoided if human values are translated into their language.
D.It has become an inevitable peril as technology gets more sophisticated.
2.What would we think of a person who invades our personal space according to the author?
A.They are aggressive. B.They are outgoing.
C.They are ignorant. D.They are ill-bred.
3.How do robots learn human values?
A.By interacting with humans in everyday life situations.
B.By following the daily routines of civilized human beings.
C.By picking up patterns from massive data on human behavior.
D.By imitating the behavior of properly brought-up human beings.
4.What will a well-programmed robot do when facing an unusual situation?
A.Keep a distance from possible dangers. B.Stop to seek advice from a human being.
C.Trigger its built-in alarm system at once. D.Do sufficient testing before taking action.
Senior 3 English reading comprehension (medium)
An increasing part of the world is becoming artificially lit. Artificial light is often seen as a sign of progress: the march of civilization shines a light in the dark; it takes back the night. But some scientists argue that unnaturally bright nights are bad not just for astronomers but also for nocturnal animals and even for human health.
Now research shows the night is getting even brighter. From 2012 to 2016 the earth's artificially lit area expanded by about 2.2 percent a year, according to a study published last November in Science Advances. However, the measurement does not include light from most of the energy-efficient LED lamps that have been replacing sodium-vapor technology in cities all over the world, says Christopher Kyba, a postdoctoral researcher at the German Research Center for Geosciences in Potsdam.
The new data came from a NASA satellite instrument called the Visible Infrared Imaging Radiometer Suite (VIIRS). It can measure long wavelengths of light, such as those produced by traditional yellow-and-orange sodium-vapor street lamps. But VIIRS cannot see the short-wavelength blue light produced by white LEDs. This light has been shown to disturb human sleep cycles and nocturnal animals' behavior.
The team believes the ongoing switch to LEDs caused already bright countries such as Italy, the Netherlands, Spain and the U.S. to register as having stable levels of lighting in the VIIRS data. In contrast, most nations in South America, Africa and Asia brightened, suggesting increases in the use of traditional lighting.
In 2016, a study showed that one third of the world's population currently lives under skies too bright to see the Milky Way at night. Between 2012 and 2016 the median nation pumped out 15 percent more long-wavelength light as its GDP increased by 13 percent. Overall, countries' total light production correlated with their GDP.
1.Which of the following can best describe artificial light?
A.Convenient but unnatural. B.Useful but energy-consuming.
C.Progressive but uncomfortable. D.Civilized but harmful.
2.What can we know about the already bright countries?
A.Traditional lighting is not used in those countries.
B.LED lights are increasingly used in those countries.
C.Efforts to reduce harmful light work in those countries.
D.People do enjoy stable lighting in those countries.
3.Why does the author mention “the median nation” in the last paragraph?
A.To show artificial light has an association with GDP.
B.To demonstrate GDP plays an important part in the median nation.
C.To stress the median nation was to blame for the light problem.
D.To suggest artificial light should be banned in the future.
4.Where is the passage most probably taken from?
A.A biology textbook. B.A book review.
C.A science magazine. D.A science fiction.
Senior 3 English reading comprehension (difficult)
Governments around the world increasingly ________ artificial intelligence to help promote economic growth.
A. put out B. roll out C. make out D. reach out
Senior 3 English multiple choice (medium)
Over the past five years, researchers in artificial intelligence have become the rock stars of the technology world. A branch of AI known as deep learning, has proven so useful that skilled operators can command six-figure salaries to build software for Amazon, Apple, Facebook and Google. The top names can earn over $1 million a year.
The traditional way to get these jobs has been a Doctor’s degree in computer science from one of America’s top universities. Earning one takes years and requires a person who can be devoted to study, which is rare among normal people. Moreover, graduate students are regularly attracted away from their studies by various high-paid jobs.
That is changing. Last month Fast.ai, an education non-profit based in San Francisco, kicked off the third year of its course in deep learning. Since its beginning it has attracted more than 100,000 students from India to Nigeria. The course comes with a simple idea: there is no need to spend years obtaining a Doctor’s degree in order to practise deep learning. Fast.ai’s course can be completed in just seven weeks.
For example, a graduate from Fast.ai’s first year, Sara Hooker, was hired into Google’s highly competitive AI residency programme after finishing the course, having never worked on deep learning before. She is now a founding member of Google’s new AI research office in Accra, Ghana, the firm’s first in Africa.
To make it accessible to anyone who wants to learn how to build AI software, Jeremy Howard, who founded Fast.ai with Rachel Thomas, a mathematician, says middle school mathematics is enough. Fast.ai is not the only AI programme. AI4ALL, another non-profit organization, founded by leading technologists including Dr. Fei-Fei Li, works to bring AI education to schoolchildren who would otherwise not have access to it.
Howard’s ambitions run deeper than just dealing with the shortage in the AI labour market. His aim is to spread deep learning into many hands, so that it may be applied in as many fields as possible. The ambition, says Mr Howard, is for AI training software to become as easy to use and common as sending an email on a smart phone.
1.What’s Paragraph 2 mainly about?
A. The way to get a Doctor’s degree.
B. The difficulties to get a Doctor’s degree.
C. The importance to get a Doctor’s degree.
D. The necessity to get a Doctor’s degree.
2.What can we learn about Fast.ai?
A. It aims to produce AI graduates in a fast way.
B. It aims to collect money for poor students.
C. It charges a high fee for offering courses.
D. It becomes popular only in India and Nigeria.
3.Where does Sara Hooker work according to the passage?
A. India. B. Nigeria.
C. Ghana. D. America.
4.What do Fast.ai and AI4ALL have in common?
A. They are both meant for children.
B. They require advanced math.
C. They have the same founder.
D. They are both non-profit.
5.What’s Howard’s attitude to AI training software in the future?
A. Anxious. B. Disappointed.
C. Optimistic. D. Surprised.
Senior 3 English reading comprehension (difficult)
Artificial intelligence can identify skin cancer in photographs with the same accuracy as trained doctors, say scientists. The Stanford University team said the findings were "incredibly exciting" and would now be tested in clinics. Eventually, they believe using AI could revolutionize healthcare by turning anyone's smart-phone into a cancer scanner.
The AI was repurposed from software developed by Google that had learned to spot the difference between images of cats and dogs. It was shown 129,450 photographs and told what type of skin condition it was looking at in each one.
It then learned to spot the hallmarks of the most common type of skin cancer: carcinoma, and the most deadly: melanoma. Only one in 20 skin cancers is melanoma, yet the tumor accounts for three-quarters of skin cancer deaths.
The experiment, detailed in the journal Nature, then tested the AI against 21 trained skin cancer doctors. One of the researchers, Dr Andre Esteva, told the BBC News website: "We find excitedly, in general, that we are on par with excellent skin cancer doctors."
However, the computer software cannot make a full diagnosis, as this is normally confirmed with a tissue biopsy. Dr Esteva said the system now needed to be tested alongside doctors in the clinic. "The application of AI to healthcare is, we believe, an incredibly exciting area of research that can be leveraged to achieve a great deal of societal good," he said. "One particular route that we find exciting is the use of this algorithm on a mobile device, but to achieve this we would have to build an app and test its accuracy directly from a mobile device." Incredible advances in machine-learning have already led to AI beating one of humanity's best Go players.
And a team of doctors in London have trained AI to predict when the heart will fail.
1.From the passage we can infer that ________.
A. Artificial Intelligence must replace humans one day
B. we can use Artificial Intelligence to cure skin cancers
C. we can use smart-phone to scan our skin at present
D. the research will be of great help to us and our health care
2.Which statement would Dr Esteva agree with?
A. Artificial Intelligence has beaten all of humanity’s best Go players.
B. Artificial Intelligence could support assessments by GPs.
C. We still need professional doctors with the help of the system.
D. There are too many disadvantages for Artificial Intelligence.
3.The words "on par with" in Para 4 most likely mean ________.
A. inferior to B. equaled by C. superior to D. opposite to
4.What’s probably the best title of this passage?
A. Cancer Doctors Are Out
B. An APP Scanning Skin Cancers
C. Artificial Intelligence—change our future
D. Artificial Intelligence—as good as cancer doctors
Senior 3 English reading comprehension (difficult)
As researchers learn more about how children's intelligence develops, they are increasingly surprised by the power of parents. The power of the school has been replaced by the home. To begin with, all the factors which are part of intelligence—the child's understanding of language, learning patterns, curiosity—are established well before the child enters school at the age of six. Study after study has shown that even after school begins, children's achievements have been far more influenced by parents than by teachers. This is particularly true about learning that is language-related. The school rather than the home is given credit for variations in achievement in subjects such as science.
In view of their power, it's sad to see so many parents not making the most of their child's intelligence. Until recently parents had been warned by educators who asked them not to educate their children. Many teachers now realize that children cannot be educated only at school and parents are being asked to contribute both before and after the child enters school.
Parents have been particularly afraid to teach reading at home. Of course, children shouldn't be pushed to read by their parents, but educators have discovered that reading is best taught individually—and the easiest place to do this is at home. Many four- and five-year-olds who have been shown a few letters and taught their sounds will compose single words of their own with them even before they have been taught to read.
1.What have researchers found out about the influence of parents and the school on children's intelligence?
A.Parents have greater influence than the school.
B.The school plays a greater role than parents.
C.Parents and the school have the equal power.
D.Neither parents nor the school has any influence.
2.Researchers conclude that________.
A.it is with the help of the teachers that children have an understanding of language
B.curiosity is formed after the children enter school
C.children's learning patterns are developed at the early age
D.only the school can give children the opportunity to make achievements
3.According to the text, in which area may school play a more important role?
A.Moral education. B.Language education.
C.Physical education. D.Science education.
4.Many parents fail to make the most of their children's intelligence because ________.
A.they usually push the children to read at home
B.they only teach them after they enter school
C.they teach them in a wrong way at home
D.they were told by educators not to teach their children
Senior 3 English reading comprehension (medium)
As the intelligence of robots increases to match ________ of humans, we may use them to expand our frontiers.
A.it B.that C.which D.the
Senior 3 English multiple choice (easy)