As with much of the technology industry, 2018 was a year of reckoning for artificial intelligence. As AI systems have been integrated into more products and services, the technology's shortcomings have become clearer. Researchers, companies, and the general public are starting to grapple with the limits of AI and its harmful effects, asking important questions: how is this technology being used, and in whose interests?
This reckoning has been most visible as a parade of negative headlines about algorithmic systems. This year saw the first death caused by a self-driving car; the Cambridge Analytica scandal; allegations that Facebook facilitated genocide in Myanmar; revelations that Google was helping the Pentagon train drone surveillance tools; and ethical questions about human-sounding AI assistants from the tech giants. The research group AI Now described 2018 as a year of "cascading scandals" for the field, and that is an accurate, if dispiriting, summary.
But there is no need to read these headlines as purely negative. Scandals, after all, are better than unknown wrongdoing, and in theory, controversy can push us to do better.
Take facial recognition. It became one of the fastest-moving technologies of 2018, with successes, such as Chinese police identifying suspects at music concerts and broadcasters using the technology to identify guests at the royal wedding, but also serious problems, including bias, false positives, and other potentially life-changing mistakes. Police forces around the world have begun using facial recognition in the wild even though study after study shows serious weaknesses, and the technology's authoritarian potential has become all too clear in China, where it is one of many tools used to oppress the Uighur minority.
None of this makes for pleasant reading, but as a result of these controversies, companies have begun building tools to combat the problem of bias, and large technology companies such as Microsoft are now openly calling for facial recognition to be regulated. Read positively, more controversy means more scrutiny and, in the long run, more solutions.
And despite the drumbeat of scandal, 2018 also saw dozens, even hundreds, of hopeful and positive deployments of machine learning and AI. There were small victories everywhere: in astronomy, where machine learning found new craters on the Moon and spotted overlooked exoplanets; in fundamental scientific research, such as using AI to develop stronger metals and plastics; and in healthcare, where many AI systems proved able to find disease faster and more accurately than humans. New tools such as plug-and-play machine learning services from Google and Amazon, and accessible courses from organizations like Fast.ai, have put artificial intelligence into more hands, and the results have been mostly useful and often inspiring.
These successes do not make up for the larger failures, but together they show that AI is a complex field. It does not move in a single moral direction; like all technologies, it is taken up by a variety of actors who use it to a variety of ends.
Looking at the year as a whole, one lesson stands out: AI is not magic. It is not a two-letter incantation that can be used to summon venture capital and institutional trust, nor fairy dust that can be sprinkled on products and institutions for an instant fix. Artificial intelligence is a process: something to examine, consider, and, if all goes well, understand. In other words, the reckoning may continue.
The Verge 2018 report card: AI

Final grade: B

Went well
- AI tools became more accessible
- Countless use cases were found across many fields
- A world-changing technology, just starting to pick up speed

Needs improvement
- Potential to expand surveillance and assist authoritarian states
- Big technology companies and governments deploy AI systems first, ask questions later
- It all ends in tears (maybe)