In critical discussions of artificial intelligence (AI), we hear the word “human” just as often as the word “technology.” Human-centered AI, humane technology, AI alignment with human values: the list goes on. While the term “human” in these phrases is used so broadly and variously that we are unlikely to agree on its meaning, all of them signal a strong will to manage a potentially uncontrollable technology in the human interest. Human involvement is critical, but it does not automatically lead to humane technology. Ethics and responsibility teams play an important role in the ecosystem of AI companies, but what about the engineers, developers, and data scientists? Humanizing technology requires giving them an appropriate education, and that education should take place in higher education, before they enter their careers. We need to reconsider the education of computer science students so that our technology develops in line with human values and advances our democratic values.
As a lecturer at Riga Business School, I teach humanities subjects and soft skills to business and computer science students. I teach mostly online from the United States, hoping to bring American and global, as well as cultural and historical, perspectives from my academic training in American literature. My mission is to broaden my students’ studies toward a well-rounded education and to teach them the human needs for social cohesion. Using the humanities’ skill set (critical thinking, questioning, close reading, and textual, cultural, and historical analysis), I strive to teach them the cultural competency needed for a diverse society. Some students seem to enjoy the fresh air that humanities subjects and approaches bring to an otherwise largely technical curriculum.
This, however, does not humanize the tech field by itself. Simply teaching humanities subjects to computer science students does not achieve purposeful or meaningful interdisciplinary learning, nor does it humanize the learner; students are often already passionate music lovers, writers, or dancers outside the university. Humanizing computer science means recognizing technology’s societal and human impact and safeguarding what is human. Ultimately, that means protecting the human dignity of every member of society. Although technology has historically been an enabler of human progress, rapidly adopted technologies such as AI could undermine people’s well-being and democratic values. My own humble approach to this humanizing effort took the form of a course, “Humans and Machines,” created in response to a series of events that occurred a few years ago.
At the time, in Washington, D.C., the Facebook whistleblower, data scientist Frances Haugen, was alleging that the company prioritized profit over people by delivering products harmful to youth mental health. Many of the computer science students in my class were writing essays about the ethical concerns of AI and responded to this news with great interest. Encouraged by the degree of student concern for ethics in the tech industry, and faced with students’ lack of language and knowledge for discussing societal implications, I decided to design a course combining AI ethics, social science, and my own area of interest, literature. The three areas complement one another to arrive at a complex understanding of humans and intelligent machines.
Some readers might wonder, “Why literature?” You may claim that the problem is best left to computer scientists and evidence-based facts; some may even argue that technology alone can solve societal problems. Besides, the humanities are a declining field, often viewed as unfit to prepare students for careers in today’s technology-led world. Increasingly, U.S. colleges are leaning toward pragmatism, cutting budgets for liberal arts degrees and prioritizing mathematics and computer science degrees instead. Yet narrowing students’ interests to an exclusive focus on science and careerist education on college campuses points to alarming outcomes.
Consider two examples. Sam Bankman-Fried, the founder of the failed cryptocurrency exchange FTX, now facing prison time, is known for saying, “I am very skeptical of books.” He prefers a six-paragraph blog post to a book. One wonders whether this MIT physics graduate would have acted in his clients’ interest if, as the New York Times technology reporter David Streitfeld writes, he had “spent less time at math camp and more time in English class,” because we sometimes find a moral compass in books (Nov. 2, 2023). In classic literature, we meet an overzealous, tormented scientist: Victor Frankenstein. Imagine if his vigorous studies of mathematics and the natural philosophy of the Enlightenment (what we today call chemistry) at the University of Ingolstadt had been extended to the humanities. I wonder whether that could have averted the tragic courses of his own and his poor creature’s lives. I wonder because literature often allows us to connect with other human experiences and to perceive things from another person’s perspective, which helps build empathy and compassion. In characters, we sometimes recognize fallible human nature and reflect on our own actions.
Empathy and compassion are the social skills we need to build a more democratic society in which technology serves everyone equally, respecting everyone’s human dignity. They also become crucial academic skills and humanizing factors when incorporated into the educational curriculum. In our AI-powered world, computer scientists are uniquely positioned to design powerful products that affect the quality of human life. Recent courageous research efforts in social science have shown, through real-life consequences, that algorithms are not perfect. Algorithmic bias, for example, can systematically disadvantage certain groups of people and perpetuate existing inequality. Code can put human dignity on the line by replacing human decision-making with discriminatory automation. When harm occurs or is predicted, could computer scientists feel or imagine the effects of their algorithms with professional empathy, even though they may not be the ones directly affected? Could they refuse indifference and engage, and continue to engage, with the societal impact of their work? Would my students keep asking the important ethical questions about AI as they do in our humanities classes? This is where the test of my humanist teaching lies.
Humanizing the computer science field should start in higher education. By teaching students an ethical framework, the recognition of problems, a structural and historical understanding of how technology wields power, and the social skills of empathy and compassion, we can bring our society closer to a humane one in which human dignity is equally respected. AI technology and education have become the touchstone of our authentic will and efforts toward such a society. Meanwhile, by partnering with computer science studies in this humanizing effort, humanities studies can renew themselves with an innovative, novel mission. From such mutual enhancement of the disciplines, united by a common purpose of social progress and well-being, can emerge computer scientists with professional responsibility and care for humanity.
Chiaki Sekiguchi Bems, Lecturer at Riga Business School