This is an incredible moment for NLP. We all routinely work with models whose capabilities would have seemed like science fiction just two decades ago, powerful organizations eagerly await our latest results, and NLP technologies are playing an increasingly large role in shaping our society. As a result, all of us in the NLP community are likely to participate in research that will contribute (to varying degrees and perhaps only indirectly) to technologies that will impact many people's lives, with both positive and negative consequences: for example, technologies that broaden accessibility, enhance creative self-expression, heighten surveillance, and create propaganda. What can we do to fulfill the social responsibility that this brings? As a (very) partial answer to this question, I will review a number of important recent developments, spanning many research groups, concerning dataset creation, model introspection, and system assessment. Taken together, these ideas can help us more reliably characterize how NLP systems will behave, and more reliably communicate this information to a wider range of potential users. In this way, they can help us meet our obligations to the people whose lives are impacted by the results of our research.