Published November 25, 2024
In Episode 45 of The Baldy Center Podcast, Siwei Lyu (Computer Science and Engineering), Mark Bartholomew (Law), and George Brown (Law) discuss the rapid evolution of generative AI, its applications, and the challenges it poses for regulation, ethics, and legal frameworks. From deepfake technology and privacy concerns to AI's integration in law and decision-making processes, their thought-provoking conversation is at the intersection of technology, law, and social policy.
Keywords: Generative AI; AI Regulation; Deepfake Technology; Legal Practice; Intellectual Property Law; AI Ethics; Media Forensics; AI in Law; AI Liability; Data Privacy; First Amendment; Targeted Advertising; AI in Judicial Decisions; Social Justice; Equity in AI.
"The technology of generative AI has advanced incredibly fast — it has become easier, faster, and the quality of generated content is much higher than ever before. This rapid progress has created real risks in areas like disinformation, financial fraud, and misuse on social media platforms. The content generated by AI is now so convincing that it can mislead and manipulate audiences, influence opinions, and even cause harm to individuals. Because of this, addressing the challenges of generative AI is urgent. We must act before these issues become even harder to control."
—Siwei Lyu
(The Baldy Center Podcast, Fall 2024)
You can stream each episode on PodBean, Spotify, Apple Podcasts, and most any audio app. You can also stream the episode using the audio player on this page.
The Baldy Center for Law and Social Policy at the University at Buffalo
Episode #45
Podcast recording date: 10/28/2024
Host-producer: Tarun Gangadhar Vadaparthi
Speakers: Siwei Lyu, Mark Bartholomew, George Brown
Contact information: BaldyCenter@buffalo.edu
Transcription begins.
Tarun:
Hello and welcome to The Baldy Center for Law and Social Policy Podcast. I'm your host and producer, Tarun Gangadhar. In this episode, we are joined by Dr. Siwei Lyu, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering, Mark Bartholomew, professor of law, and George Brown, lecturer in law. We explore the impact of AI on our society, focusing on issues like privacy, ethics, and the need for regulation. Our guests will share their thoughts on how we can use AI responsibly and address potential challenges. Here are Professor Lyu, Professor Brown, and Professor Bartholomew.
Thank you all for joining us today. To begin, could each of you kindly introduce yourselves? Could you start, Professor Lyu?
Siwei:
Yes. Hi. Thanks for having me. My name is Siwei Lyu, and I'm the Empire Innovation Professor in the Department of Computer Science and Engineering. My research area is artificial intelligence, machine learning, and computer vision, with a special focus on media forensics and detecting AI-generated content.
Mark:
Hi, I am Mark Bartholomew. I'm a professor in the law school here. My areas of interest are intellectual property law and law and technology more broadly. One thing I've been working on is publicity rights and whether or not people should have rights to stop AI recreations of them after they've died. I also teach and write in copyright law, so that's something we might talk about here, because copyright law and AI have been intersecting a lot.
George:
I'm George Brown. I teach here at the law school in our third semester of legal research and writing. Before I joined the faculty, I practiced commercial real estate and corporate law at a law firm, Harris Beach. My focus, when it comes to AI, is on how it applies to legal practice and lawyers in general: the ethical and professional responsibilities when you're drafting your documents and how you can use AI in that manner.
Tarun:
Thank you all for that introduction. Professor Lyu, when you spoke before the New York State legislature, you emphasized the urgent need to regulate AI, particularly generative models. Could you discuss the main motivations behind this call to action?
Siwei:
Sure. I think the reason I call for urgent attention to generative AI models comes down to three things. First of all, the technology advanced very fast and matured in just a few years. A couple of years back, creating very realistic multimedia content with algorithms was still hypothetical, and now it's a reality. Basically anybody with an internet connection, a web browser, and an idea of what they want to make can hand a few lines of text to a generative AI tool, and the media will be created for them within a matter of seconds. So it has become easier, faster, and the quality of the generated content is higher. The technology got very good in a very short period of time. Secondly, these tools are being used in real life to cause damage. We have all heard about disinformation with generated content creeping into our social media and information ecosystems, trying to mislead us and influence our opinions. But they're also used in more mundane, bread-and-butter ways, for instance in financial frauds targeting seniors and other vulnerable user groups. AI-generated content has made those frauds more believable and convincing, and the damage they cause is increasing. More importantly, they can be used to target individuals. A lot of the cases happening now are what people call revenge pornography, where someone uses AI to put a victim's face or voice into pornographic videos and spreads them online, causing emotional and psychological stress and defamation damages. So the threat is real. And the third part is that we do not know enough about these systems, and our responses are usually too late, too little, and not very effective. That's why I think the need to pay attention to this problem is urgent, and that's the reasoning behind my testimony before the New York State legislature.
Tarun:
That's very insightful, Professor Lyu. Moving to the next question, what are some specific challenges in regulating generative AI, especially given its rapid advancement and global access?
Siwei:
Yeah, regulating generative AI is challenging mainly because this phenomenon is very new. We do not know enough about its scope, its capabilities, its limitations, or the precise delineation between innocuous uses and harmful uses. In particular, when we talk about social media, and I'm not a legal expert, we have the real experts here, First Amendment rights, free speech, are a constraint on any regulation effort, because in some cases it's really, really hard to tell whether something is meant as satire or humor or is actually meant to harm. And this is also very subjective, because it depends on who the audience is, right? Different cultural backgrounds and different contexts can make an innocuous message become offensive. All of this adds layers of complexity to regulating generative AI technologies. From the technical point of view, as far as I've learned, the technical landscape of generative AI shifts so fast that any regulation effort has to keep pace with that development, and that alone is quite a challenge. That's why, based on what I've seen across the world, we basically have two extremes. One is very strict bans on using generative AI for certain purposes; we have seen legislative efforts in that direction from China. The other extreme is essentially to let go and see what happens for a while; that's basically the stance the US has taken, and I think is still taking, while the European Union is somewhere in between, in the middle. So I don't even see consistency in how this kind of regulation would be enforced, if it's possible at all.
Tarun:
Thank you for highlighting those challenges, Professor Lyu. Professor Bartholomew, given your expertise in technology law, how can accountability be enforced for the misuse of AI such as deep fakes and scams or identity thefts? Are there significant gaps in current liability laws?
Mark:
Yeah, I'll echo part of what Professor Lyu already said: the First Amendment is a big concern. There was just a case recently in California, where California has a specific law about using deepfake technology in political campaigning, and the court said this seems to be a violation of the First Amendment and enjoined enforcement of the act. So we have some big constitutional law questions in this country to wrestle with too. But then, thinking beyond the First Amendment, what are some of these loopholes? When I think about this, I think of three possible regulatory targets. One is the direct wrongdoer, the identity thief or the person who's making a sexually explicit deepfake video. Then there are the platforms themselves, like OpenAI or Google, or a social media platform like Meta or X, which is allowing certain uses of AI to be shared on the platform. And then there are audiences, the people who receive these uses of, let's say, deepfake technology. Regulators have to look at all three of those. Now, you can't use AI to defraud people; generally speaking, there are already laws on the books about that. But then the question is, how do we enforce them? Sometimes I may know there is a harmful deepfake out there, but the solution isn't to go after the direct wrongdoer, it's to go after the platform, or at least convince the platform that it's worth their while to take down the content. That brings up questions of how we incentivize this. Can we develop some kind of regime where, once the platform is notified about deepfake content, they take it down immediately? That's something we have to wrestle with, because there's a sweet spot between taking down things that are harmful and not taking down things that are harmless or taking them down prematurely. So that brings up difficult questions. And then with audiences, part of it is education, but part of it is also disclosures. If we can require notices that people are viewing synthetically made material and not real people or real images, if there's some way to legislate or require some kind of watermarking for synthetic content, then we might equip audiences with better tools to deal with these things. I'll just mention a couple of things, since you asked about potential loopholes or blind spots. At least in the United States, there's no comprehensive federal law on deepfakes. So we're relying on a patchwork of states, and the states have begun to target political uses and sexually explicit uses. That leaves other potential things we might want to deal with. One thing I think the law is catching up on is voice. From my understanding, voice synthesis has gotten exponentially better in a short time, so dealing with voice is important. And then there are other uses of this technology that might not fall under politics or sexually explicit uses that we might still object to: taking celebrities and making them look like they were selling some kind of good when they really weren't, or defamatory material. That's where we have old areas of law that might deal with this, but the question is whether they're nimble enough. That's something we need to figure out: do we need sector-specific legislation, or can the regular rules of defamation and privacy law deal with this?
Tarun:
Those are critical points on accountability, Professor Bartholomew. Moving to the next question, Professor Brown, what ethical responsibilities do you believe these companies have? Do you think that self-regulation is feasible or is there a need for structured guidelines?
George:
So I think that when it comes to self-regulation, it's going to depend on the type of platform we're talking about. When it comes to a general generative AI platform, like ChatGPT or OpenAI, they probably will have less of an incentive to self-regulate or to restrict what can be created or what it can be used for. In those instances, there's going to be less of a financial incentive for them to pursue self-regulation, and I think structured guidelines that regulators come up with would be useful. Then there are platforms that are more niche, or focused on what they're being used for. In our field, there are several legal databases that use AI for research or for drafting purposes. Two of the large legal research platforms, Westlaw and Lexis, both have their own AI components that can assist in your research and in drafting legal documents. In those instances, they're going to have a financial incentive to make sure that they're producing correct and valid information, that the case law they're providing to the lawyers using these platforms is correct, so that it can be relied upon more strongly than if you were using OpenAI or ChatGPT. So in those instances, there's going to be some self-regulation, but it's strictly financial incentives that are causing that self-regulation.
Tarun:
Thank you, Professor Brown. Your perspective on ethics provides valuable context. Professor Lyu, AI models often rely on personal data, which brings up crucial privacy concerns. What safeguards would you recommend for protecting user data while supporting innovation?
Siwei:
The bottom line is to have the user's consent and keep a clear provenance of how the data is used, by whom, in what ways, and under what circumstances. I think the largest privacy problem we have now is that users literally have no control. We have no control over our own data. We put our data on a platform thinking it's our private data, and even though we don't share it with anyone, there's no guarantee that it will not be used to train some algorithm. And the development of AI makes this issue trickier. It used to be, like with copyright, that if I grab somebody's work there is a clear trace, at least in theory, of where that work came from. But when a model is trained using our data, that data is buried together with many other data. So it's like finding a needle in a haystack. It is hard, but I think it's doable; there's just no incentive for the companies to do it.
Tarun:
Thank you, Professor Lyu, for those insights on privacy. Professor Bartholomew, considering AI's role in targeted marketing, do you think additional protections are needed for consumers? What policies could address potential exploitation?
Mark:
So I teach a class on advertising here in the law school, and when we talk about when advertising or marketing goes too far, I work with the students to think about the difference between persuasion, which we tend to accept, and manipulation, which we say is unacceptable. One thing that turns persuasive marketing into manipulation, and I think others who have thought about this would agree, is keeping things clandestine, hiding things from the audience. So we can partially solve this by mandating some kind of notice, letting people know: you've received this because of certain things we saw in your data, or you've been categorized a certain way. One of the dangers with this technology is that it becomes easier and easier to sort people into different categories. Some of that is okay. It's good if I receive certain targeted marketing as a middle-aged man in Buffalo, because I'll get products more relevant to me. But at a certain point things can be too personalized, and we object to it; I'm not being treated the same as other consumers, and that can lead to potential discrimination. So partially we can try to address this with some kind of notice, some mandated requirement that you can open up and look at why you've been chosen as a target for advertising. The other thing that I think is a good dividing line between acceptable persuasion and unacceptable manipulation is targeting the vulnerable. Say this is being used to target kids. We've tried to stop that on social media with some pretty ineffective laws, but at least we've tried, and we might have to have similar innovations to deal with targeted marketing to kids using AI. The other thing is these scams that target the elderly. There are members of the population who need extra protection; otherwise this leads to unacceptable manipulation. So I think about that too: what kind of specialized rules can we have to avoid targeting the vulnerable? And the last thing I'll say, because this is a subject I think about a lot, is that we also need to think about what areas of life we want to quarantine from targeted marketing in general. We don't want to be sold to 24/7, everywhere we go. Already we see a lot more advertising now than when I was a kid. There weren't advertisements right before you went to a movie; now they're accepted. There weren't advertisements when I pumped gas into my car; now we have them. So I think we need to think creatively about where we want to try to hold the line against seeing advertisements and being sorted based on our personal data, and try to build some guardrails now before we get too far down the path.
Tarun:
Thank you for those insights, Professor Bartholomew. Professor Brown, AI can generate false information or hallucinations, which raises concerns in legal research and practice. Could you discuss how judicial restrictions on pro se litigants using AI for legal research impact equitable access to justice?
George:
Certainly. There are many different types of AI standing orders that courts are issuing right now across the country, but there's no one set way that courts are required to do this. I think it's similar to a lot of the other regulations or restrictions on AI across the country, where it's kind of a mishmash of different approaches depending on which court you're practicing in front of. Most of them are going to require that the person using AI include some sort of certification when submitting their work product that discloses the use of AI and confirms that they have personally gone through and verified that any of the authority or citations included in their legal documents are valid and are not hallucinations. And this is a clear concern for judges across the country. There have been several prominent cases where attorneys cited hallucinated cases created by either Google Bard or ChatGPT: the case citation looked real, it followed the citation form required by the court, the summary sounded real, and it cited the actual judge who sat on the court the case was claimed to be from, but the cases were actually created by the AI platforms. So there are a couple of concerns from the judges. One is that they want to make sure they're allowing for innovation in legal research and in legal practice, so a lot of them don't want to restrict the use of AI, but they also want to protect the rule of law. And there's a concern, especially given that our legal system relies on stare decisis and that future court decisions should be based on prior court decisions, that if a court were to inadvertently rely on a case that didn't actually exist, it would deteriorate the public's trust in the rule of law. Some judges have even gone as far as to quote 2001: A Space Odyssey, saying that this is a mission too important to jeopardize, and so they need to put guardrails in place around the use of AI; that was Judge Fuentes in the Northern District of Illinois. And he points to the jeopardy caused by the notorious, or infamous, case Mata v. Avianca, where the attorneys for the plaintiff cited several false cases and then refused to retract them after they were raised as possible hallucinations. The judge in that case required the attorneys who cited and relied on these false cases to write apology letters to the judges who were alleged to have drafted the false opinions. So there is definitely a real concern there for judges. But the other concern is that an outright ban could be inequitable for pro se litigants. These outright bans are not going to restrict lawyers who use Lexis or Westlaw or other legal research databases that they have the finances to access, but most of our pro se litigants are going to come from underserved or low socio-economic backgrounds, and they're not going to have the same access to those databases. So there's a real concern that a ban may prevent people from pursuing the right to represent themselves, which is a real tenet of our legal system.
Tarun:
Thank you, Professor Brown. Those are important considerations for equitable access. Professor Lyu, deep fake technology is increasingly used to manipulate social perceptions. Based on your experience, what are some of ethical considerations that should be taken into account in the development and responsible use of these technologies?
Siwei:
Sure. I think ethical responsibility should run throughout the whole pipeline and life cycle of generative AI and generated content, or deepfakes, from three particular angles. First, the technology companies developing generative AI tools should have social responsibility in mind when they are providing services to the general public. That means considering the potential social impact, not just the positive, beneficial outcomes, but also the negative ones, even if they seem low-probability, because a lot of the damage we're seeing starts from the goodwill of advancing science, engineering, and technology and ends up in negative consequences. The second part is the social media platforms. I think they could have done a lot more to ensure the integrity of the information coming to us. Their ethical responsibility is to make sure, at least, that provenance information is associated with each piece of media reaching individual users, so people have clear information about the source and origin of that information. And for typical users, we are all learning this new technology in the same way. I give talks at high schools, because high school students tend to think this technology can do great things and they start to explore it. But I make the analogy that it's like handing a Maserati to a kid who's just got a learner's permit, right? We are more or less in the same position, because we do not fully understand this technology, especially the potential harm it can cause. So the ethical responsibility for everybody using these services is, first of all, do not use them for devious purposes. Secondly, even when we do something we think is innocuous, we should pause and think, because whatever is generated could be read very differently by others. So I think these are the things. That's why in computer science and engineering, when we teach AI, we now have a specific component about the ethical use of AI and generative AI built in. We're realizing that even though STEM students mostly work in the science and technology domain, social responsibility should be an intrinsic component of their education.
Tarun:
Thank you, Professor Lyu, for outlining those ethical considerations. Professor Bartholomew, as AI becomes integrated into legal and regulatory frameworks, how should transparency and fairness be upheld, particularly when AI is used in decision making?
Mark:
Well, I think it's really tricky, because sometimes the AI is making decisions and you don't even know why it's making them. So the whole question of algorithmic accountability is a thorny one. To a certain degree, there should be mandated disclosures and records that regulatory officials can look at to see if things are working the right way. But I think that's easier said than done. I'm not an expert on the technology itself, but I think that's problematic. So I'll just mention two other things that are sometimes talked about in this space. When we have something that's so hard to understand, and particularly when we're worried about stopping harms before they've gotten down the road, most of the time we wait for something harmful to happen and then try to fix it. How can we be more proactive? So, two ideas. One is the idea of a corporate research board. You have something like a governmental or quasi-governmental entity inside one of these big companies, like Microsoft, OpenAI, or Meta, and it's separate from the actual people in marketing and the engineers who are developing this. This separate board says: hey, you're planning to use AI technology for this, and we wonder, ethically, if that's a good idea. Sure, you're having people within the corporation decide this, so there's a danger of the wrong incentives, but you can staff it with different people. You could even staff it with one or two people from outside the organization, and it's designed to keep things honest, to stop something from being released that could be harmful. So that's one idea. The other idea for dealing with this is embedded examiners. I think the businesses won't like this, and people of a more libertarian bent, who want to avoid government overreach, wouldn't like it either, and I understand those objections. But in the past we have had the Federal Trade Commission, in the 1970s, say, actually looking over the shoulders of these businesses and asking for information before anything bad happened: we want to see your reports. And even today, when we consider something that's very big and hard for people to understand, banks, we have regulatory officials who are actually at the bank, in an office in the bank, looking and making sure they comply with these complicated financial regulations. So if we're worried about the potential harms from AI, those might be two more creative ways to try to deal with this.
Tarun:
Thanks, Professor Bartholomew, for that added step to our understanding of the harnessing of AI. Professor Brown, how do you envision educational institutions and centers, like The Baldy Center, contributing to public awareness of AI risks and benefits? What role should they play?
George:
I think this podcast is a great example of what The Baldy Center and other educational institutions can do: providing education about topics like this to a broad general audience. They can also provide grants and support research so that people can explore how AI is going to touch so many different areas. My focus is obviously specific to the impact AI has on legal practice and how lawyers use AI, but it's going to touch freedom of speech and things of that nature. It's going to touch technology law. It's going to touch tons of different types of law. So by giving grants and giving people the ability to research all these different areas, institutions can highlight all the different types of impacts that AI is going to have. One other thing is that the people writing on behalf of these educational institutions should consider that there are risks from AI similar to the case law we were talking about, where they could be wrongfully alleged to have written something they didn't actually write, and it could be attributed to them. As a person interested in pursuing scholarship, that's something I would be concerned about: that something I didn't write and didn't think about could be wrongfully attributed to me because of generative AI. So I think there are a lot of ways that educational institutions can assist in this manner.
Tarun:
Thank you, Professor Brown. It's great to see how educational institutions can play a role. Professor Lyu, AI is beginning to play a role in policymaking and even judicial process. Could you share your thoughts on the implications and limitations of using AI in these domains?
Siwei:
Sure. We're dealing with increasing amounts of data and more complicated scenarios, so AI algorithms definitely help us reduce that complexity and make better decisions. But in my opinion, these AI models should never replace human decisions. They should only provide additional information for us to make better decisions. Just to give you an example: my colleague Hany Farid from UC Berkeley studied a widely used machine-learning-based data analysis algorithm called COMPAS, which is used in the criminal justice system to predict outcomes relevant to bail and probation decisions. It turns out that this system doesn't work very well, even though many local jurisdictions with fewer resources have to rely on the algorithm to make decisions for them. That's really problematic. I think there are three levels of problems if we rely solely on AI models to make decisions for us. The first, as I think Professor Bartholomew mentioned, is that AI models are fundamentally uninterpretable. We do not understand why a model makes a certain decision, and because it's not interpretable, it's very hard to hold it accountable. That's one of the major sources of the problem. The second problem is that AI models are trained on data, and that data is collected by humans. So the models inherit any bias, mistakes, or prejudice built into those data, whether intentionally or unintentionally. These models are not a hundred percent bulletproof in making correct decisions. Thirdly, even if we do everything right, with an interpretable algorithm and completely vetted data, the data themselves, just by their mathematical and statistical nature, can tell very different stories depending on how we ask the question. There's a very famous phenomenon in statistics called Simpson's paradox where, put very simply, suppose you have two major league hitters and you compare their batting success rates by year. One player is better than the other every year, but when you compare their combined performance across all the years, it's the reverse. Nothing is wrong with the data; it's the same data and there's no misinterpretation. It's just that the way we look at data intrinsically carries this kind of ambiguity. So that's why I think we should be very careful about using AI algorithms to make decisions. They can provide good information, but making decisions is a totally different issue.
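To make the batting-average version of Simpson's paradox that Professor Lyu describes concrete, here is a minimal Python sketch with invented hit and at-bat counts; the players, seasons, and numbers are purely illustrative, not real statistics.

# Simpson's paradox with invented batting figures (hits, at-bats).
# Player A beats Player B in each season, yet B beats A overall,
# because the two players' at-bats are distributed very unevenly.

seasons = {
    "Year 1": {"A": (4, 10),   "B": (35, 100)},  # A .400 vs B .350
    "Year 2": {"A": (30, 100), "B": (2, 10)},    # A .300 vs B .200
}

totals = {"A": [0, 0], "B": [0, 0]}
for year, players in seasons.items():
    for player, (hits, at_bats) in players.items():
        totals[player][0] += hits
        totals[player][1] += at_bats
        print(f"{year} {player}: {hits / at_bats:.3f}")

for player, (hits, at_bats) in totals.items():
    print(f"Combined {player}: {hits / at_bats:.3f}")

# A leads both years (.400 > .350 and .300 > .200),
# but combined, B leads (.336 > .309): same data, opposite story.

The aggregation itself, not any error in the data, flips the comparison, which is the ambiguity Professor Lyu warns about when algorithms summarize data for decision-makers.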
Tarun:
Thanks, Professor Lyu, for sharing those key points. Before we finish, is there anything you'd like to add or any final thoughts you'd like to share?
George:
One thing I do want to add is that we should consider how underrepresented communities can be impacted by the improper use of AI. Most of the claims someone would bring, whether it's wrongful use of their likeness or image in a deepfake or things of that nature, are more often than not going to be civil claims, which means there's no right to counsel. So they would have to pay for counsel or represent themselves. I think funding efforts that encourage attorneys to represent these people pro bono, whether through the ACLU, the Volunteer Lawyers Project, or some other entity that can help protect people at risk of having their personal information or likeness misused, is going to be important, because litigation costs mean that only those who can afford it will be able to pursue those claims and protect their images.
Mark:
Maybe I'll just add that one thing we haven't talked about is just how much money is being poured into AI and how valuable these companies have become. That makes it harder to regulate them in some ways. California was just about to have a relatively aggressive AI law, and the governor vetoed it, in part because of very understandable concerns about stifling innovation. So my prediction, and it's always bad to make predictions, but I'll go ahead and make one, is that we'll probably just have smaller, sector-specific, harm-specific rules, like stopping sexually explicit deepfakes. Right now there are laws about that in about 25 of the states, and my guess is that's enough critical mass that we might get a federal law. But a big, overarching AI law like they have in the European Union? I'd be surprised if that happened anytime soon. The last thing I'll say, though, is that we have these old areas of the law that are having to wrestle with AI as well, and that might throw some curves at the industry. The training sets are potentially based on copyrighted data, and if we get a verdict saying all these training sets infringe, that could be a financial body blow, or at least something very significant, to the industry. And there are other areas of the law that might be implicated. There was a recent lawsuit filed about these relationship chatbots and how a chatbot prompted someone to commit suicide; the company is being sued for product defect liability, and that suit probably won't succeed. But there's really no knowing when one of these lawsuits will succeed, and that might change the way AI looks and the way AI is regulated. So we'll have to see where these cases end up.
Siwei:
Yeah, I would just comment on the current situation when somebody becomes a victim of deepfakes, and the kind of hurdles they have to overcome to mitigate the damage. I know this because I've been collaborating with New York State's Office for the Prevention of Domestic Violence, and using deepfakes as a tool of domestic violence is becoming a trend; people use these kinds of explicit images in different ways to harm the victim. It turns out that the OPDV actually looked at the policies of major tech companies, and if you are suspected to be a victim of deepfakes, it's your responsibility to identify every deepfake of you on their platform and then request, one by one, to have them removed. I think this is really ridiculous, because it harms the victim again, two or three times over: they have to go looking for the material that harmed them and request that it be removed. That's where I think we need some legal guidance saying that once those cases are identified, the companies should take them down, because the companies can do that easily. Within a company like YouTube, you can do a video search much more easily than an ordinary user can from the outside. So I think we can make them do more and provide better protection for all users in this regard.
Tarun:
Thank you all for sharing your thoughts, and I truly appreciate your time and insights.
George:
Thank you.
Mark:
Thank you.
Siwei:
Thank you for having me.
Tarun:
That was Professor Lyu, Professor Bartholomew, and Professor Brown, and this has been The Baldy Center for Law and Social Policy Podcast produced by the University at Buffalo. Let us know what you think by visiting our X, formerly Twitter, @baldycenter, or emailing us at baldycenter@buffalo.edu. To learn more about the center, visit our website, buffalo.edu/baldycenter. My name is Tarun, and on behalf of The Baldy Center, thank you for listening.
Transcription ends.
"We have some big constitutional law questions when it comes to regulating AI. For example, California's deepfake law for political campaigns was challenged as a potential violation of the First Amendment. This tension between free speech and accountability is a critical issue as we try to address the harms of AI."
—Mark Bartholomew
(The Baldy Center Podcast, Fall 2024)
"Judges are concerned about hallucinations in AI-generated legal content. They want to ensure innovation in legal research, but outright bans might harm pro se litigants who lack access to paid legal databases. The challenge is balancing trust in the judicial process with equitable access to justice."
—George Brown
(The Baldy Center Podcast, Fall 2024)
Siwei Lyu is a SUNY Empire Innovation Professor in the Department of Computer Science and Engineering at the University at Buffalo, State University of New York. Dr. Lyu's research interests include digital media forensics, computer vision, and machine learning. He is currently the Director of the UB Media Forensic Lab (UB MDFL) and the founding Co-Director of the Center for Information Integrity (CII) at the University at Buffalo, State University of New York. Continue reading profile.
Mark Bartholomew writes and teaches in the areas of intellectual property and law and technology, with an emphasis on copyright, trademarks, advertising regulation, and online privacy. His articles on these subjects have been published in the Minnesota Law Review, the Vanderbilt Law Review, the George Washington Law Review, the William & Mary Law Review, the Connecticut Law Review, and the Berkeley Technology Law Journal, among others. His book Adcreep: The Case Against Modern Marketing was published by Stanford University Press in 2017. Continue reading profile.
George Brown, Jr., graduated with a B.A. from the University at Buffalo in 2012 with a double major in Political Science and Psychology. He was a magna cum laude graduate of the University at Buffalo School of Law in 2017, where he was Publications Editor of the Buffalo Law Review and Co-chair of Fundraising for the Buffalo Public Interest Law Program (BPILP). Following graduation, Brown practiced commercial real estate and corporate law at Harris Beach PLLC, representing purchasers and sellers in commercial sale transactions, corporate entities in contract negotiations and day-to-day corporate legal issues, landlords and tenants in negotiations of commercial leases, and financial institutions in the negotiation of commercial loans. Continue reading profile.
Tarun Gangadhar Vadaparthi is the host/producer for the 2024-25 edition of The Baldy Center Podcast. As a graduate student in Computer Science and Engineering at UB, Vadaparthi's research work lies in machine learning and software development, with a focus on real-time applications and optimization strategies. He has interned as an ML Engineer at Maksym IT, where he improved deep learning models, and as a Data Engineer at Hitachi Solutions, contributing to World Vision Canada initiatives. He holds a bachelor's degree in electrical engineering from NIT Nagpur and has also completed a summer program on Artificial Intelligence and Machine Learning at the University of Oxford. Vadaparthi's research and projects are rooted in data-driven decision-making, with a strong commitment to practical innovations in technology.
Matthew Dimick, JD, PhD
Professor, UB School of Law;
Director, The Baldy Center
Amanda M. Benzin
Associate Director
The Baldy Center