The Future of AI
Artificial intelligence is poised to transform society. How do we develop it safely?
When the company OpenAI released an artificial intelligence program called ChatGPT in 2022, it represented a drastic change in how we use technology. People could suddenly have a conversation with their computer that felt a lot like talking to another person, but that was just the beginning. AI promised to upend everything from how we write programming code and compose music to how we diagnose sick people and design new pharmaceutical remedies.
The possibilities were endless. AI was poised to transform humanity on a scale not seen since the Internet achieved wide-scale adoption three decades earlier. And like the dot-com craze before it, the AI gold rush has been dizzying. Tech companies have raced to offer us AI services, with massive corporations like Microsoft and Alphabet gobbling up smaller companies. And Wall Street investors have joined the frenzy. For instance, Nvidia, the company that makes about 80 percent of the high-performance computer chips used in AI, hit a market capitalization of $2 trillion in March, making it the third most valuable company on the planet.
But amid all this excitement, how can we make sure that AI is being developed in a responsible way? Is artificial intelligence a threat to our jobs, our creative selves, and maybe even our very existence? We put these questions to four members of the Boston College computer science department: professors William Griffith, Emily Prudhommeaux, George Mohler, and Brian Smith. We also spoke with Gina Helfrich '03 of the University of Edinburgh's Centre for Technomoral Futures, which studies the ethical implications of AI and other technologies.
This conversation has been lightly edited for clarity and length. Helfrich was interviewed separately, with her comments added into the conversation.
We constantly hear about the wonders of AI, but what questions should we be asking about it?
William Griffith: If you think back to social media, it actually changed the way we operate and interact. I'm wondering how AI will possibly either extend that or go in a different direction. We should look at AI from many ethical perspectives, such as justice, responsibility, duty, and so on. My sense is that is the way to think about most of the challenges that confront us, not only technologically but socially and environmentally.
Emily Prudhommeaux: One of the big issues is going to be authenticity. When media, images, language, or speech are created through artificial intelligence, it's getting to the point where it's so good that it's difficult to know if that product was produced by a human or by artificial intelligence. That's one of the big things that people are struggling with right now: how to educate people so that they can tell the difference, because it's going to get more difficult.
George Mohler: The question I find interesting is, is this an immediate existential threat or is that kind of overhyped? And if you look at the experts who invented this technology, they're actually split. Some of them believe that in twenty years we could have artificial intelligence that's smarter than humans. And then the other segment of AI researchers believe we're very far from that.
Brian Smith: One of the first things that came out was the ethics of how people are behaving with these things. How will students, schools, teachers, faculty members deal with a machine that can essentially just do your homework? The problem is, people were going, "AI is this new thing, and we're going to be scared of it." But the reality is, it's really academic integrity that's the issue. So there is kind of a value system around academic integrity that has to come in before we start thinking about the technical pieces of things.
Prudhommeaux: I think most students are using ChatGPT to guide them. And I don't think many students are wholesale copying text from ChatGPT and popping it in a Word document and submitting it to their class. But I have noticed that I can tell when something was written by ChatGPT because it sounds really dumb in some way. It sounds like it was written by a team of marketing executives.
So how do we promote academic integrity in the age of ChatGPT?
Gina Helfrich '03: I don't know that professors and university leaders have a great answer yet. It's all still so new. People are still being extraordinarily creative in the ways that they're coming up with to use these tools. But the companies who created the tools didn't have a clear vision of what they should be for in the first place. I don't think that it's helpful to assume that all students want to cheat on their essays. It's more interesting to look at reasons that students choose to cheat or plagiarize, as opposed to singling out AI as somehow special. That being said, there's this feeling that to stay on the cutting edge, universities should welcome the use of generative AI [which can be instructed by a person to create original pieces of writing, videos, images, etc.]. Yet, so much of what happens in the classroom is still left up to the individual instructor, and some instructors will say, "Yeah, go to town, use generative AI. We don't mind." And others will say, "Absolutely not." It must be very interesting from a student point of view to have polar opposite expectations and experiences around these tools, and I genuinely don't know how they're navigating it. My sense is that university leaders are really scrambling to try to figure out what line they should take on these tools.
William Griffith
Associate Professor of the Practice in the Boston College Computer Science Department
Griffith was previously associate director of the Boston College Computing Center and studies the ethics and mindful uses of technology. He is a licensed clinical psychologist.
How else is AI going to shape the development of our children?
Griffith: How this technology will affect kids cognitively, emotionally, and in terms of their education is going to be a serious issue. You can invent personalities, you can invent things in more realistic ways than ever before, and kids will figure out how to use this technology. I have great concerns about the development of children and the presence of this software.
Of course, it's not just higher ed. Corporate America, Wall Street, the military, and so many other sectors are also struggling with these questions. Should the government step in and regulate AI?
Mohler: There are so many different types of AI that each type would have its own issues and avenues for regulation. For example, with chatbots like ChatGPT or Llama, the issue is more around copyright (they are trained by using other people's data) and what to do about that. Some people have said, "Oh, we should stop training those models." That doesn't make sense to me. It makes sense for people and scientists to be able to investigate the models and then to figure out the copyright issues. On the other end of the spectrum, you have things like autonomous weapons for military use. That's not going to be regulated by the US; there will need to be some international treaties. Then there are technologies like autonomous vehicles or medical treatments that will need some sort of regulation.
Prudhommeaux: I was recently reviewing papers for our main professional conference, and I read several that were proposing chatbots for mental health therapy. And for every single paper, there was one reviewer who was like, "I think this is not necessarily an ethical application of AI, to replace a human with a machine for a vulnerable person who's experiencing a mental health emergency." That's something I can imagine being regulated relatively easily by the government. I'm teaching a criminal justice class right now, and one of the problems we're looking at is dealing with recidivism, and how do you predict that? Can a person do a better job at predicting whether someone will commit another crime when they are let out of prison? Can a computer do a better job with that? And that's something I can imagine being regulated, too. But some of the things that they want to regulate are more complicated. For example, how do you force AI to not tell someone how to make a bomb if that's what they request? There are all these things you can trick AI into doing for you, and it will provide really good, accurate information. How is a company supposed to prevent those things from happening within their software? I think a lot of that kind of regulation would be very difficult to implement.
Helfrich: Historically, we've seen that when there are innovations of various kinds, it can take a while for the gears of government to catch up. But ultimately, I think the public does expect that the government will step in and make sure that things that are being advertised and sold to the public are not going to be grossly harmful. I think we're getting to that point now where governments around the world are catching up to this big change in the past few years around AI and starting to institute some much-needed regulations. I'm sure it is ultimately going to be an iterative process. Maybe we'll have this first iteration of the regulations, and we'll find the ways that it's working and the ways that maybe it's not working, and come back and make changes so that it works better.
Gina Helfrich '03
Manager of the University of Edinburgh's Centre for Technomoral Futures
Helfrich's work is focused on the ethical implications of development in artificial intelligence, machine learning, and other data-driven technologies. A PhD, she is also the deputy chair of the University of Edinburgh's AI and Data Ethics Advisory Board.
Its been reported that AI has been used to select the targets of drone attacks. Who bears responsibility when AI makes mistakes during wartime?
Helfrich: The topic of who's responsible is huge in thinking about ethical AI. The researcher Madeleine Clare Elish came up with the concept of the "moral crumple zone." A crumple zone on a car is designed to take the impact in a crash, so that it protects the driver and passengers in the vehicle. The moral crumple zone is essentially the nearest human who can be blamed for whatever is happening with regards to the computer. Keeping with the theme of cars, think about a car like a Tesla that is in a self-driving mode when it gets into a crash. We say, "This self-driving car crashed." Who should we hold responsible? Well, the person who put the car into the self-driving mode, right? That's the nearest person that we can assign that responsibility to, so they're in the moral crumple zone. It's definitely something to be concerned about, because that can be a way of letting some of the companies that are pushing AI tools off the hook. At the same time, there are also decision makers in the organizations that use AI tools developed by tech companies. Those people also need to be held responsible and accountable for any mistakes. If we're talking about a military use, for example, there has to be someone in the military brass who made the call to say, "We're going to delegate these targeting decisions to a machine." If the machine makes mistakes, who decided that the machine is the one that should make those choices? The question of collective accountability and responsibility around AI tools is something that we have to keep in mind, because they're so complex, and because the process that goes into their development and deployment goes through many, many hands.
Griffith: Using AI in warfare has complex, multilevel ethical and political implications, ranging from the international to the individual level. When can AI make decisions autonomously, if at all, and when will human intervention be required? It also raises the question: Can a machine be programmed with human ethical decision-making ability? The challenge for policy makers is to develop well-thought-out legal and ethical standards that will be applied individually and internationally. People say, "Well, it was the software that was the problem, and you can't go after the programmers." I think that some of these programmers ought to be like licensed engineers, in the sense that you wouldn't drive onto the Tappan Zee Bridge if it was built by people who weren't licensed engineers. The software industry needs to think about itself the way the engineering profession does when it comes to licensing. That's maybe part of the responsibility, but there are famous cases where a medical device killed people because the hospital using it didn't investigate it well enough, and the people using it weren't trained well enough, and the people who designed it used software stopgaps instead of hardware. You couldn't ultimately assign responsibility in those cases because there were six players in the game. So I'm not sure how we regulate that. That's a difficult problem.
George Mohler
Daniel J. Fitzgerald Professor and Boston College Computer Science Department Chair
Mohler's research focuses on statistical and deep-learning approaches to solving problems in spatial, urban, and network data science.
But what does it mean for us as humans to hand off decision-making to a machine?
Griffith: Certainly, it can make us lazier mentally and otherwise.
Smith: With some of these tools, you go and query something, and it'll just tell you stuff. Whereas, not that long ago, we would have to go to Google and get links, and then we would have to do a little bit of mental processing to make sense of the search results. Now you don't even have to think about it. Context becomes really important. At what point does it make sense to use these things to gain some efficiency, to speed some things up, and hopefully not take away from our own ability? And then, of course, it also brings up the question of what is important to know, much like search engines raised the question of what's important to know. I remember people saying, "Oh, kids don't know the dates of the Civil War anymore." Who cares? What really matters is, why was there a Civil War?
Griffith: The Swiss psychologist Jean Piaget said you need a challenge to grow and develop your cognitive abilities. How do you get smarter if these technologies make everything easier?
What are some of the obstacles to international standards for responsible AI development?
Helfrich: Those efforts are already underway. There are many different principles that have been developed around responsible use of AI by all kinds of different organizations. But there's a geopolitical struggle around the race for AI, like the US versus China. Those kinds of tensions lead away from a more unified international agreement. Colleagues of mine point out that we've accomplished this for other things that everyone agreed were really important. There are international standards around airplanes, for example. So it could absolutely be the case that we might see something like that with regards to AI. And if we don't, then we can probably expect there to be differing AI regimes in different parts of the world. What's expected with regards to AI in China might look somewhat different than the expectations in the US or in Europe.
As AI makes it easier and easier to generate authentic-looking imagery, how will we be able to trust anything we find online? Are we entering an unprecedented era of misinformation?
Prudhommeaux: One of the challenges is that it's difficult for most people to tell the difference between something that was created by a computer and something that was created by a person. Tech companies are always going to be in a race to see who can get ahead of whom in AI, but I feel like there's another role they could take on, which is developing technologies that can help identify things that were created by a computer and then educating people about that. Maybe there's more of a role for companies to be saying, "Here's an image. We think it's not a real image. We think this image was artificially created."
Griffith: It makes me think of raising children who are subjected to this technology, and how we will teach them to make these decisions and handle these creations that we're leaving them as we pass on, and I'm not sure the educational system is up to that yet.
Helfrich: I think digital literacy is part of the solution, but it's certainly not sufficient on its own. There are efforts to think about new ways of verifying the provenance of an image. But human beings can only be so vigilant. The first deepfake that I was genuinely taken in by was a viral image of the Pope wearing a designer Balenciaga coat. I just thought, "Oh, cool jacket. Good for you." But the image was a fake. The reason that things like that fool people like myself is that we have no reason to be on alert or suspicious that a picture of the Pope in a jacket is something that isn't actually accurate. And so I think that's where malicious actors are really going to have the edge, because humans just don't have the mental fortitude to be on alert for every single thing that we encounter and ask, "Is this real? Is what I'm looking at a deepfake?" It's exhausting. You just can't question your reality every moment of every day like that. And that contaminates our information environment, because we risk getting into this situation where the digital infrastructure that we've come to rely on, like Internet search, becomes polluted by AI-generated content. We no longer know how to sift what's true from what's false, because we're used to being able to go into Google and get good information. But what happens when you go to Google and the top ten results are all AI-generated fluff?
Emily Prudhommeaux
Gianinno Family Sesquicentennial Assistant Professor in the Boston College Computer Science Department
Prudhommeaux's areas of research include natural language processing and methods of applying computing technologies to health and accessibility issues, particularly in the areas of speech and linguistics.
The technology to replicate human voices is astonishingly accurate. We read about people being taken in by scammers imitating a loved one's voice.
Prudhommeaux: The technology for generating speech is actually really good. It used to be quite terrible, and you could immediately tell if something was a synthetic voice. Now it's getting much more difficult. I can't even begin to figure out how you would stop that kind of scam from happening, but unfortunately, those kinds of scams are happening. Even without the help of artificial intelligence, people are being scammed all of the time over phone and Internet and text into sending money to places they shouldn't send money to. I know educated people who have fallen victim to these kinds of scams. So I feel like while it is true that it's very easy to impersonate someone's voice now, it might be just a very small percentage of scams that are actually relying on that technology.
Helfrich: We might decide that artificial mimicking of human voices is too dangerous, and if it's too dangerous, it's off the table. Yes, maybe there are many ways that it could be useful. Maybe it could give a more robust voice to people who rely on technology for their own voice, like people who can't speak with their vocal cords anymore. But maybe we decide that the benefit is outweighed by the harm of all the fraud and scams that are enabled by synthetic voices. It remains to be seen how these kinds of questions get addressed at the regulation level, but weighing benefits and harms is going to be a huge part of making those decisions.
AI is already allowing workers to offload some tasks to a computer. Isn't there a risk that the technology could improve to the point where a human isn't needed to do a job at all?
Prudhommeaux: The actors' and writers' strike earlier this year was interesting. A lot of that had to do with artificial intelligence. Would studios replace writers with something like ChatGPT? Can AI create footage of an actor giving a performance they never gave? I think that they were really ahead of the curve by striking when they did, because they recognized that automation, artificial intelligence, and machine learning could potentially replace them. I don't think it's going to happen soon. We may be bumping up against some natural AI limits shortly. But I do think there's the potential in other sectors for this same thing. Computer programmers are always worried that they're going to be replaced by ChatGPT or Microsoft Copilot or whatever. And I can certainly see that as a possibility, but right now, if you ask ChatGPT to do a lot of coding things, it kind of gets it right, but then it makes stuff up and it gets stuff wrong. You definitely still need a human there to actually make it work and to integrate it into the system. So I can see it having an impact, but I don't think it's something that's happening right now.
Helfrich: What we've seen so far is that any company that has tried to wholesale replace human beings with AI has later had to backtrack. The AI just does not perform up to spec in a variety of contexts. Many of these workplace concerns are around replacing employees with generative AI tools, and those tools have no concept of what is true and what is false. They don't have any sense of what it means to be accurate to the real world. So there is an inherent risk that generative AI tools will make some kind of meaningful mistake that will come back to bite the company that has employed them. A lot of these tools are not ready for prime time in that way, and the hype has perhaps prematurely convinced some companies that they are ready, and these companies are reaping the consequences of those choices. Some kinds of work that people are used to doing will be handed off to AI tools, but in terms of AI operating all on its own to replace a person, that doesn't seem feasible to me anywhere in the medium term, because this is an unsolved problem.
Brian Smith
Honorable David S. Nelson Chair and Associate Dean for Research at the Lynch School of Education and Human Development
Smith studies the design of computer-based learning environments, human-computer interaction, and computer science education. He also has an appointment with the Computer Science Department.
Human biases have been shown to influence everything from outcomes in the criminal justice system to hiring decisions in corporate America. Since humans are designing AI, how do we prevent human biases from making their way into these new technologies?
Griffith: I don't think we'll ever get rid of bias. It's always going to be present because cultures have different values. A bias doesn't mean something negative. But if it becomes a prejudice, then that's when I start to think about how we have to govern it. How did the biased data get into these files in the first place? People must have asked questions, and the questions were biased in the beginning. They're value-laden. Look at the biases that are causing prejudicial laws to be made, prejudicial hiring decisions to be made, and so on and so forth.
Prudhommeaux: It's not that the algorithms are biased or that the people who made them are prejudiced or whatever. It's that the data they're being built on has bias in it. And that may be a bias that exists in the world, or it may be a bias of individuals who are creating content. I actually had my students ask ChatGPT to create a bio for a computer science professor, and it was like, "He did this. He did that. He has a degree from this place." And when I asked them to do it for an English professor, it was a "she." For a nursing professor, it was "she." For an engineering professor, it was "he." Maybe ChatGPT is like, "Well, this is the way it is in the world, so I'm going to predict the most likely thing." I think a lot of the bias is there in the data, and trying to get rid of that is complicated. And a lot of those biases are not necessarily people being prejudiced. A lot of them are just reflecting the way the world is at certain times.
Mohler: With these models that are making decisions, we evaluate their accuracy for different groups of individuals. We can make explicit the model's weaknesses. And then, because we can inspect the model, we can try to adjust it to reduce bias. There's a whole subfield of computer science that is trying to deal with issues around algorithmic fairness and bias. There are people out there trying to solve those problems. If an algorithm or a human is going to make a critical decision, probably both are biased. Is it possible that with an algorithm in the loop, we could make that decision less biased? I think the answer is yes.
Griffith: And why do these programs have to think the way we do? If they thought differently, would that be a positive? Could they investigate our biases?
Helfrich: It's a huge difficulty. Right now, a lot of that AI training data comes from the Internet. That leads to the question: Well, who's most well represented on the Internet? The English language, for example, is hugely overrepresented. So even though having a diverse development team could be very helpful in improving problems with bias for AI tools, that is by no means enough, because the data that the AI tools are built upon themselves exhibit social biases. The digitally excluded are not part of the training data for AI tools. It's a really difficult question.
It seems like every day we read another news story about a giant tech company buying up a new AI company. Is it a problem to have so few companies with so much control over this new technology?
Prudhommeaux: They're the ones that actually have the resources to be able to build these kinds of models. Something like ChatGPT or DALL-E, a university can't really build that. We don't have the resources to do that. The only people who can do that are these huge, huge companies with tons and tons of money and tons and tons of access to computing resources. So, until we can figure out how to make AI require fewer resources, it's going to have to be them doing it. There is an effort through the National Science Foundation to create some sort of national artificial intelligence research resource that would pool computational resources for researchers in the US and might allow them to have resources similar to what these companies have.
Smith: I suppose the question is, even with the budget of the National Science Foundation, could you build something like a Google or an Nvidia? The amount of computing power is just so big. I talked to another group of universities who were thinking about whether they could in fact pool research: "We don't want to get left behind. How do we band together to build our own infrastructure to create models that are university-led?" I looked at them and was like, "Well, this is an elite group. So if you did this, wouldn't you effectively build the same problem? It would be the university elite as opposed to the corporate elite. Therein lies the problem." I said, "I'll tell you what, why don't you add to your team some historically Black colleges and universities, a couple of minority-serving institutions?" And this was a panel. So they went, "Right, I believe we're out of time."
A number of prominent AI researchers have signed on to a statement warning that artificial intelligence could lead to human extinction, and science fiction often portrays AI gaining some kind of sentience that leads to the development of a rival consciousness. How plausible are these scenarios?
Mohler: People should think about what AI technologies do well and what they currently don't do well. AI can write a plausible college essay. But we don't have artificial intelligence that can clean your house. I think the distinction there is important, because normally we would have thought, "Well, writing a college essay is much harder than putting away the dishes in my kitchen." But in fact, we are pretty far away from having any kind of technology that could do that for us. ChatGPT can't plan. It doesn't reason in the way you might want it to. It's just measuring correlations in text and then filling in missing text after that. I think there are a lot of steps that would need to happen to have movie-level artificial intelligence in our lives, and it's unclear how you would get to that level of technology.
Smith: Someone asked me, "What about HAL from 2001: A Space Odyssey and movies like that?" And I was like, "So it's plausible because it happens in movies? Is there a nonfictional example that you can give me of machines trying to kill humans?" And that person got upset, saying, "That's not funny." I said, "No, it is. Because you can't give me an example of this happening." Mr. Coffee never decided one day, like, "That's it. We're taking them down." Alexa didn't say to the room, "Trip them, knock them out, give them concussions." It doesn't happen. It's a weird thing to me that people would imagine, "Oh, it's the end of the world," when there are things happening right now in the world that actually need our attention, as opposed to thinking about the Roomba getting really mad and going, like, "That's it ..."