Standing at the Next Legal Frontier

A roundtable discussion about how artificial intelligence is pushing the boundaries of the law

by Annie Breen

On first examination, the cowboys and pioneers who established and personified the American West would seem to have nothing in common with the modern-day tech lawyers who populate today’s courtrooms. But delve a little deeper and the similarities start to appear, especially in the context of artificial intelligence and its rapid evolution in the American legal system.

Image: San Francisco City Hall

New borders are constantly being drawn and redrawn as new tech is invented, and the Wild West of tech legislation is still in its infancy. To provide some perspective on this burgeoning area of the law, we gathered four members of the USF Law community who teach or practice at its forefront. Guided by moderator Professor Tiffany Li, Joshua de Larios Heiman ’05, Jordan Jaffe ’07, and Professor Michele Neitz discuss the future of this nascent legal domain.

Roundtable moderator: Tiffany Li, Professor of Law

Participants:

  • Michele Neitz, Professor of Law and Founder, USF Center for Law, Tech, and Social Good
  • Joshua de Larios Heiman ’05, Managing Director of Data Law
  • Jordan Jaffe ’07, Partner at Wilson Sonsini Goodrich & Rosati

Tiffany Li: Thanks so much for being here and participating in a discussion about how the law is dealing with, and how it should be dealing with, artificial intelligence. To provide some background, can you describe your current role and how you moved from law school to this point in your career?

Michele Neitz: I’ve been a law professor in San Francisco since 2006, and I’ve seen the rise (and sometimes the fall) of various technical industries. My research focus has always been on the ethical uses of power, whether that is power vested in judges, corporate executives, or tech leaders. Teaching emerging technology at USF Law is such a fun chapter in my career, and I think our students are ready to be on the cutting edge of these legal fields. For example, I often ask students to imagine what the law should be in cases where we don’t yet know what it is, and I am consistently impressed by their answers.

Joshua de Larios Heiman: I’m the managing attorney of Data Law Firm, and I’m proof that there are many different paths to a legal career. I started law school because I wanted to advance in a securities career that I began after graduating from UC Berkeley as a history major, and I knew I’d need either a JD or an MBA to do that. When I was in law school, I wrote a paper on elvish gold (World of Warcraft fans will know what I’m talking about), and that caught the eye of a hedge fund manager, who ended up giving me a job. I didn’t pass the bar the first time, but I did a ton of networking — happy hours, bar event volunteering — and I got jobs by knowing people (once I passed the bar). In short, there’s no one way to launch a career ... but I will say, being really nice is what got me and kept me on track. The legal field has a long memory, and people remember who treated them kindly.

Jordan Jaffe: I practice intellectual property litigation as a partner at Wilson Sonsini Goodrich & Rosati, and prior to that I was a partner at Quinn Emanuel, where I began my legal career after I graduated from USF Law in 2007. I was a computer science major in college and became interested in IP rights at that time. I actually wrote my senior undergraduate thesis on the topic. So I knew in law school that I wanted to focus on technology and the law, and I’m still practicing in that area today. The reason I’ve stayed in this area my whole career is that I’m constantly learning new things and finding new challenges. I’ve also been lucky enough to work with many clients on emerging technologies. One focus for the past several years has been AI. It’s something I’ve been interested in for a long time, so it has been gratifying to see the issues I was looking at come to the forefront.

Li: What slices of AI law do you think are most important for lawmakers to be focusing on right now?

de Larios Heiman: There are so many nuanced pieces to this issue — one important element is that lawmakers should not be technologically specific when drafting legislation. That said, one of the first areas I think needs to be looked at is testing repositories: What’s the remedy? Who owns the rights to the data being stored? Should it be mandated that the tech that generated the data be destroyed once the cases using it are finished?

Neitz: Of course I will answer that I think we need to focus on AI for social good! Areas of priority should be LLMs and their unintended consequences. What are my rights as a creator if LLMs utilize my work without attribution? What if I am adversely affected by a decision — such as a loan denial — that is the result of algorithmic bias? Given the lack of action in Congress, I suspect many of these questions will be answered in the courts.

Jaffe: Transparency and protecting vulnerable populations from discrimination are good places to start. Generative AI can reflect back biases in its training data if proper safeguards are not in place. We want AI to help eliminate biases, rather than perpetuate them.

Li: Should there be federal AI law(s)? If so, what should they be?

Neitz: First, we need a clear federal definition of what “AI” is, so that all states are starting with the same legal framework. I also think there should be a national push to educate both the young and the elderly about how tech works, how AI works within the technology sector, and how to use tech safely. Those with the least knowledge will always be the most vulnerable.

de Larios Heiman: I don’t see a federal law happening anytime soon. I think we should focus on rights in this context. If we focus on defining the rights, we should be able to connect the existing regulations to what’s coming. Whatever legislation happens, I want to see teeth — whether it’s criminal, civil, or both, the legislation must be enforced.

Jaffe: I agree that general federal AI legislation isn’t likely to happen soon. But I think the European Union’s recent AI Act, which sorts uses of AI into risk categories and regulates them accordingly, will affect the big companies and how they operate, and that will necessarily affect AI legislation here.

Li: That sets us up nicely for the next question: How do you advise clients on issues where the law is unwritten/being written/constantly changing?

Jaffe: I like to establish where the client falls first. What’s their business model, what kind of money do they have, what’s their risk tolerance? Then I give them options that range from low to medium to high risk. My role as their attorney is to give the big picture and lay out the current landscape so they can make an informed decision.

de Larios Heiman: I notice that regulators tend to be more understanding when they see a history of a client trying to comply (and not just meet the bare minimum of a standard), and I advise clients accordingly. You have to discuss the fact that it gets dicier as you get closer to the regulatory margins. At the same time, it’s best not to be the lowest-hanging fruit — you don’t have to be the fastest, you just need to be in front of the person who’s in front of the bear.

Neitz: I’m surprised (and heartened) that no one’s said they advise clients to go overseas! Singapore is regulatory-friendly, and France wants to attract AI companies. There’s more clarity around AI in other countries for clients who are risk-averse. But if you’re working here, definitely figure out risk tolerance — are they a self-funded startup, or do they have $100M in VC? Then consider their geography ... for example, some areas of technology law are more advanced in Wyoming than in California.

Li: It seems as though the one throughline in American AI law right now is how rapidly it’s evolving. For law students who are hoping to get into tech/AI law, how would you recommend keeping up with its advancements? What do you personally do to stay abreast of the law?

Neitz: It’s fun but exhausting! I listen to a few podcasts, like Nathaniel Whittemore’s “The AI Daily Brief,” subscribe to some newsletters, like Trey Ditto’s, and I read legal news every day. One thing I’ll never advise for law students is TikTok or YouTube!

de Larios Heiman: I’ll start with the disclaimer that all these recommendations are like milk: They won’t stay fresh for long. That said, I frequent websites like Ars Technica, TechCrunch (hit or miss), ACLU (to see where the next AI nightmare is coming from), and Reddit (often showcases the cutting edge of what’s right and wrong with new AI). And I follow news from the Department of Justice, the Federal Trade Commission, and local attorneys general, to track where their focus is at any given time.

Jaffe: I keep a pretty slim media diet to be more intentional — no social media here. I read a newsletter from Law360 every day ...

de Larios Heiman: I’ll give a plus-one to Law360.

Jaffe: I also read Techmeme, a tech news aggregator. And my firm sends out many alerts each day. On AI specifically, I also recently started reading ChatGPT Is Eating the World, which does a great job of tracking the status of AI litigation.

The discussion concluded with mutual appreciation, a few shared law school memories, and a collective excitement about the future of the legal area they’re all so passionate about. And it left a lingering thought: Maybe comparing practitioners of AI and tech law to the Western cowboys of yore is not the most apt metaphor. Perhaps they are more like the field’s sentinels, standing watch over a domain whose landscape is changing rapidly and constantly, the better to advise their clients and keep their students at the leading edge of what’s next.