Convergence Ep3: Meeri Haataja – AI Ethics & Governance

In this episode Meeri Haataja, the CEO & co-founder of Saidot, joins Convergence to discuss how her start-up approaches ethical issues relating to artificial intelligence and the governance of AI. As the use of AI in dispute resolution becomes increasingly relevant, entrepreneurs focused on AI ethics will be at the forefront of addressing novel challenges. As Lao Tzu said, “difficult problems are best solved while they are easy.”

“Convergence” is a bi-weekly, limited series of conversations with thought-leaders and practitioners at the intersection of dispute resolution and technology. Host Oladeji Tiamiyu will focus on such topics as the role technology has had in resolving disputes during the pandemic, the various ways technological tools have historically been incorporated into dispute resolution, and the creative use cases that technology presents for resolving disputes in the future.

Host

Oladeji Tiamiyu

Guests

Meeri Haataja is the CEO and Co-Founder of Saidot, a start-up with a mission of enabling responsible AI ecosystems. Saidot develops technology and services for AI risk management, focusing on transparency, accountability, and agreements on AI.

Meeri was the chair of the ethics working group in Finland’s national AI program, which submitted its final report in March 2019. In this role she initiated a national AI ethics challenge and engaged more than 70 organizations to commit to the ethical use of AI and to define ethics principles. Meeri is also the Chair of IEEE’s Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), an initiative for the creation of AI ethics certificates.

Meeri is an affiliate at the Berkman Klein Center for Internet & Society at Harvard University for the 2019–2020 academic year, focusing on projects related to building citizen trust through AI transparency as well as developing certifications for judicial AI systems. Prior to starting her own company, Meeri led AI strategy and GDPR implementation at OP Financial Group, the largest financial services company in Finland. She has a long background in analytics and AI consulting with Accenture Analytics, where she drove data and analytics strategies and large AI implementation programs in the media, telecommunications, high-tech, and retail industries. Meeri started her career as a data scientist in telecommunications after completing her M.Sc. (Econ.) at the Helsinki School of Economics. An active advocate of responsible and human-centric AI, she is an experienced public speaker who regularly speaks at international conferences and seminars on AI opportunities and AI ethics.

Resources

Berkman Klein Center for Internet & Society at Harvard University

Review: Neuromancer by William Gibson

Saidot

Transcript

Oladeji Tiamiyu  00:01   1, 2, 3, 4. Welcome to “Convergence” with Oladeji Tiamiyu.

So, part of my aspiration with this podcast is to have cross-disciplinary conversations between technologists and the dispute resolution community. From my perspective, the use of artificial intelligence in dispute resolution systems will only increase in relevance in the years ahead. So, because this is such an important topic, the next two episodes will focus on AI ethics and the governance practices of artificial intelligence. So, this episode will be with Meeri Haataja, the founder and CEO of Saidot, a startup focused on providing AI ethics solutions to public and private sector actors. In addition to the work she does with Saidot, Meeri is also an affiliate with Harvard’s Berkman Klein Center, and the chair of IEEE’s Ethics Certification Program for Autonomous and Intelligent Systems. Alright, let’s get to it. Meeri, welcome to “Convergence.” I know you are all the way in Finland, so we have significant time zone differences. And it, just, it means a lot that you were willing to take time out to chat with me today.

Meeri Haataja  01:39

Thank you so much. This is an honor. It’s a very beautiful summer day. Very hot actually [chuckles]. Yeah, this is a great time, and I really appreciate the opportunity. So, thanks

Oladeji  01:53   Great, great.

Meeri  01:54   for having me.

Oladeji  01:55   Yeah. So, this is a bit non-traditional. I’m going to start our conversation with a book recommendation. Actually, immediately before this conversation, I’ve been reading this science fiction book called Neuromancer. And it’s by William Gibson. It was written in, I think, the mid-1980s, and it’s before artificial intelligence was really put into reality. And the book explores certain AI concepts, and it even envisions potential use cases for the internet. And I don’t want to give any spoilers, but artificial intelligence in this book has a life of its own. And the internet is like a virtual reality system that you can plug in and out of. So, it’s just been a great book that I’m reading right now, and I would strongly recommend it if you’re into science fiction at all.

Meeri  02:54   Very interesting. Yeah, I would love to read it. So, what kind of future does it outline? Is it a dystopia? Or something amazing, or something in between?

Oladeji  03:09   You know, I think the beauty of it is, it’s somewhere in between.

Meeri  03:14   Ok.

Oladeji  03:15   There are elements of it that I feel are dystopian, like the background with how different cities are structured. There are certain inequalities at play that play on dystopian themes. And then there are certain utopian elements of it where, for instance, you’re able to travel between different parts of the world in a short amount of time due to advanced technology. So, there are elements of it where I’m like, wow, this would be so nice. And then there are other elements, like even with AI, there’s a scene where the police officers are trying to arrest an AI system, because the AI system wants to go out of control. So yeah, it’s a great book, and it is both dystopian, and it has some elements of a utopian vision, I would say.

Meeri  04:22   All right, that sounds great.

Oladeji  04:24   Absolutely. So, Saidot. I’d say Saidot is doing such exciting things with AI ethics. And I feel like, building off of Neuromancer, AI ethics has such an important role to play in managing artificial intelligence. So maybe my first question for you is just getting a better sense of what attracted you to this emerging field of AI ethics and some of the problems your startup is trying to address.

Meeri  05:01   Mhm. Yeah, so the story, I guess it starts from before we started Saidot, and all the work that we, the two founders of Saidot, have been doing in this space of AI. So, my background is really in developing AI, taking AI into use in different business processes across different industries, financial sector, media, technology, and so forth. So, that was the background, and then GDPR, the data protection regulation of Europe, changed a lot of how we’re thinking about these questions. That was a major influence, and through that I actually got involved in AI ethics discussions. I came to realize, first looking at my own thinking, and how I’m, like, you know, looking into the impact that we’re having with technology, and then looking at others, and I came to the conclusion that, ok, a lot is happening. The influence starts to be, like, you know, really significant, and it’s only growing, and the majority of us, like, you know, don’t see all the influence that we are having, and all the impacts that we’re having on people’s lives. So, it was sort of an awakening, first on how limited an understanding I had of my own role, and then figuring out that, okay, this needs to be solved in some way. We need to be able to shape how we develop these systems, seeing the impact further than the first immediate goal that our system is solving. So, that kind of thinking initiated the process. And I got involved with a lot of national activities, the national AI program, I was leading the ethics working group over there, got to discuss these topics with a lot of organizations in the public and private sector. And then we saw a lot of international cases, incidents happening out there. And I got really passionate about finding very operational, practical means for helping organizations to systematically address these questions. So, so that was . . .

Oladeji  05:04   Yeah.

Meeri  05:06   . . . the story, how it got started. Yeah.

Oladeji  07:38   That’s great. You know, I definitely feel like, from the private sector, the need for clear, digestible AI ethics governance is really strong right now. And you also mentioned how your interest was impacted to a certain extent by GDPR. And I think that’s interesting, because there’s the regulatory motivation for companies to adopt better AI governance practices. And then, you know, I was actually reading one of your papers, where you talked about how, yes, the regulatory factor is certainly relevant for these companies, but also the financial risk, right? When users and customers distrust the company because of a lack of accountability or clear governance practices, then it’s going to be a financial risk factor for the company. So, I was actually really curious to just hear a bit more about that, whether from conversations you’ve had with executives, or just how you envision their thinking about this concern around financial risk based on customers distrusting their AI ethics practices?

Meeri  09:07   Yeah, I think there have also been some studies around this topic, about what is driving companies in initiating AI responsibility-related activities, and based on those, I think it’s quite systematic. And it makes total sense, because we don’t yet have those regulations in place which would basically require this kind of work at the scale that we might be seeing in the future. So, the driver is, you know, based on my experience, and based on those studies that I’ve been reading, the trust, the business reasons, and at the heart of that business reason is basically establishing, or further developing and maintaining, trust between the organization and its key stakeholders. Most importantly, of course, customers, but also employees, for example. And I think that’s a very good motivation, that it is at the heart of the business model. And in acting based on these motivations, you’re really doing it to maintain your capability to operate your business model, and not doing it just for compliance [chuckles]. It’s just, like, you know, a way more fundamental motivation if it’s at the heart of your business model, and about the trust between you and your customers.

Oladeji  10:43   Yeah, yeah, I agree. And in terms of maybe best practices, recommendations from you, how can some of these companies foster greater stakeholder trust in their AI practices? Like, I know you’ve mentioned a certain level of transparency, but are there other best practices that some of these companies should be thinking about?

Meeri  11:19   In general, I think it’s very hard to think about AI ethics or responsible AI without figuring out new and good ways of engaging with your stakeholders. So, in a way, I think that’s sort of a core principle, a foundation: you cannot make good ethical decisions in isolation from your stakeholders. Based on my experience, when we’re working with customers, that’s really one of the areas where we’ll try to find good ways, and try out different ways, of, first of all, understanding who are the people we’re influencing. Who are the stakeholders here? And then thinking about different ways. What are the meaningful ways of engaging those people in the process? Letting them have a say on how AI services and AI technologies are influencing their lives, the opportunities they are provided, and so forth. So, I think we can actually think about it from this whole design thinking perspective; there are a lot of very good practices if you bring design methodologies, which are widely applied in any kind of technology development, into the space of AI. I think this is something that isn’t as new as we often think. If you look at it from that product perspective, applying design thinking in new development always considers the users and gives voice to the user. So, I think we should consider those means more in the AI context as well.

Oladeji  13:18   Absolutely. Yeah. In one of your papers, to quote you once again, if my memory serves me correctly, you wrote, “one should start with ethics principles, since the ethics of an AI system are the breathing values of our algorithms.” And I like that because it illustrates, you know, like these algorithms, sometimes there’s this pressure to think of algorithms as a black box. And that’s usually from the perspective of a stakeholder who doesn’t have access to a company’s code. And that can be really detrimental for stakeholders, right? It can undermine their trust, their willingness to engage with that company. So, for me, I’m always wondering, you know, historically, and I know this is an emerging industry, but historically, there seems to have been some kind of trepidation or lack of willingness for companies to be more transparent with the algorithms that are written. And my perspective is that that could be because of these proprietary pressures, you know, like, these companies, they hire these engineers and developers to write this code. And if they’re too transparent with it, then a competitor can come in, you know. And then there’s this open source nature of, I would say, millennials, right, like GitHub is so popular for this open source spirit. And I feel like it conflicts or competes with this obsession with proprietary information. Do you see this open source versus proprietary culture changing at all?

Meeri  15:25   Yeah, that’s a super interesting theme. And yes, in general, I think we need to challenge that notion that you very well described, about protecting our IP and so forth. In general, I’ve been trying to think about the benefits of open source and that way of working, and how to sort of use some of that approach in the AI context, apply that to AI transparency, not necessarily trying to make AI open source [chuckles].

Oladeji  16:14   [Chuckles]

Meeri  16:14   Across all industries or companies, like, I think that’s not what is realistic from a company or industry perspective. But sort of that idea that with that transparency, exposing your work, whether to a limited audience of trusted stakeholders or to the open public, allows you a very fast feedback loop. And that allows you to see the problems much sooner than you would possibly see if you just keep it to yourself. And then if you’re welcoming that feedback, and if you are agile enough to actually be able to respond to it, I think that’s a very interesting recipe for the future. And that’s where I believe we need to go in order to be able to really properly govern these increasingly complex technologies.

Oladeji  17:14   Yeah, I agree with you. And, you know, this podcast episode is focused on AI ethics. And I also think, for our listeners, it might be nice to also explore the benefits of artificial intelligence. You know, you mentioned that before creating Saidot you already had a background in this industry. So, I’m curious to hear from you some of the benefits that artificial intelligence presents in different industries?

Meeri  17:54   That’s always a difficult question, because we apply AI across so many, like, all industries, so it’s about your own favorites that you happen to love. Of course, I’m always bringing in these kinds of, like, you know, questions: I cannot imagine healthcare that is not utilizing, to a large extent, all those opportunities in personalized healthcare, you know, drug development, and so forth. I think there are tremendous opportunities there. And probably all of us want to also benefit from those. And that’s one interesting question, how do we make sure that people are equally benefiting from the progress of AI in healthcare. But also, in the education area, being able to support learners individually, like, you know, getting the full capacity of a person, being able to recommend or support you in a learning process in the best possible way, individualizing the learning process and so forth. You can find great applications in basically any industry, I think. What are your favorites?

Oladeji  19:25   You know, for me, because I focus on dispute resolution and technology, the use of artificial intelligence in online dispute resolution is becoming more and more of a valid and effective use case. And with ODR, healthcare is a good comparison, because to a certain extent, in both, you have large data sets, right? Like you have the possibility of prior disputants in a system sharing all of that information over an accumulated period of time. So, you have that data set. And then with medicine and healthcare, you have all of their medical information. And you can use that to spot trends, patterns that human intelligence would really struggle with. So, with online dispute resolution, there are the hopes and aspirations of preventative dispute resolution, to a certain extent, where, when a platform is working, especially in e-commerce, you can give guidance to merchants on a platform, letting them know that, you know, if you go down this path, or you continue taking these business practices as a merchant, odds are you’re going to end up in a dispute with a buyer. So, I think online dispute resolution is increasingly recognizing all of the benefits of AI. And even with alternative dispute resolution, with mediators, for example, AI still has what I describe as a comparative advantage: the ability to understand, based on rules set by a developer, large swaths of data that a mediator would really have to spend months, years trying to understand, and AI can do that in a matter of seconds. So, to me, you know, I’m excited about the prospect of AI in dispute resolution systems. And, as you’re pointing to, with the governance aspects of it, in online dispute resolution it’s even more important, because we are entrusting potentially legally binding outcomes to an algorithm that, to a certain extent, users of the platform don’t really know what is going on with, right? They just kind of submit to it. So yeah, that’s kind of how I’m thinking about it. Do you have any concerns with AI in courts or with alternative dispute resolution?

Meeri  22:28   Um, yeah, I agree with you. It’s a fascinating area. And I definitely think we need to, in this whole development of e-commerce, where we are basically facilitating so many different processes with technology. Everything is going digital, so we also need to find ways of resolving these situations. So, yeah, I agree with you. It’s a fascinating area. What fascinates me is not only that use case or application area for AI, but also applying the same ideas that have been developed in dispute resolution, or online dispute resolution, to ethics problems or questions. Could we build a similar kind of setup for finding good solutions to different, concrete ethical questions in relation to how we use AI, and individual decisions made by AI, and so forth? So yeah, this is interesting.
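To make the preventative guidance Oladeji describes more concrete, here is a minimal sketch, in Python, of how an e-commerce platform might warn merchants whose behavior tends to precede disputes. Every signal name, threshold, and function here is hypothetical and invented for illustration; a real platform would derive such rules from its own accumulated case data rather than hard-coding them.

```python
# Hypothetical sketch: warn merchants whose recent behavior often precedes disputes.
# All signal names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class MerchantStats:
    late_shipment_rate: float    # fraction of orders shipped after the promised date
    refund_denial_rate: float    # fraction of refund requests the merchant denies
    description_complaints: int  # "item not as described" complaints, last 90 days

def dispute_risk_warnings(stats: MerchantStats) -> list:
    """Return plain-language guidance before any dispute is ever filed."""
    warnings = []
    if stats.late_shipment_rate > 0.10:
        warnings.append("Over 10% of your orders ship late; late shipments often end in disputes.")
    if stats.refund_denial_rate > 0.50:
        warnings.append("You deny most refund requests; consider reviewing your refund policy.")
    if stats.description_complaints >= 5:
        warnings.append("Repeated 'not as described' complaints; check your listings for accuracy.")
    return warnings

for w in dispute_risk_warnings(MerchantStats(0.15, 0.60, 7)):
    print(w)
```

In practice, the hard part is exactly what the conversation turns to next: who reviews such rules, and how transparent the platform is about them with the merchants and buyers they affect.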

Oladeji  23:41   Yeah, and I know, before the recording started, we kind of explored this equation, if you will, that greater stakeholder engagement leads to increased trust and more avenues for accountability. And I find this equation to be so interesting, because at least with online dispute resolution, the need for accountability from the owner of the platform is really important, right? Like, users of a dispute system are, I think, more likely to trust it if there’s some level of accountability that the online dispute resolution platform has to the users. And yeah, so that was just something that came to mind, and I was curious, for you, with companies that you’ve been working with, or even governments, about the level of openness that they have in being accountable to stakeholders.

Meeri  24:54   Mhm. Yeah, these two principles, transparency and accountability, those are my favorites. Those are basically what we have built the whole concept of Saidot on. And we try to facilitate, enable ways of working with and bringing in this transparency, using transparency to facilitate and enable accountability. And this whole stakeholder engagement is obviously very much related: you need to have transparency, or something that you communicate about how your decisions are made, or how your technology works, in order to be able to engage and get feedback. So yeah, I think those are sort of the foundational principles of AI ethics; they enable so much. They enable understanding whether we are having non-biased systems, or whether we’re having problems with equality, and they enable human oversight, and so forth. So, I really feel that this equation that you really well laid down, there is so much in it. I agree that first it starts from the developers and the owners within the organization; they need more transparency, they need to understand how the systems work, so that they are able to carry the responsibility. Hold themselves accountable. Because it’s really hard if you don’t know what you are accountable for. So that’s where we very often start: bringing more transparency to the organization itself. For example, the business owners, they need to have better means of understanding how the system has been developed, what kind of decisions it’s making. So that’s where we start. And that’s, I think, where it needs to start. Before going to external stakeholders, you need to have the internal stakeholders accountable and provided with enough data. Then there are so many different external stakeholders who then come into the picture. The people, the users of the systems: what kind of transparency do we need to provide for them so that they are comfortable? So they can trust that the system has been developed with their best interest in mind, and that it works reliably, and so forth. Then there can be these kinds of mediators or auditors, reviewers, who have a little bit different perspective; they need to know more so that they can actually verify that the system is working well, making good decisions, and so forth. So, it’s actually a very broad topic. There are so many different parties who need to be supported in being able to trust or hold the accountability.

Oladeji  28:01   Regulation of AI is so young, right? There just aren’t that many jurisdictions that have a track record of codes and statutes regulating AI. So, I was actually just curious, whether it’s Finland or just the EU broadly, about the current regulations that have been put in place for AI.

Meeri  28:29   Yeah, in this context, it’s definitely the latest European Commission proposal that we should discuss, because that’s something that, from a European perspective, everyone who is following this space has known is being prepared. So, I think on a state level there hasn’t been any reason to take forward, like, you know, too many activities before hearing what the EU is planning in this area. So yeah, that was a major announcement a few weeks ago, in April. And it’s very interesting; I think from a global perspective it definitely gives a good benchmark now on how you actually could regulate this area. It’s very focused. There are a few important characteristics: there are prohibited uses of AI that are basically against the values that are to be protected; there is the definition of high-risk AI use cases; and then expectations, or requirements, for these kinds of higher-risk use cases, which come from so many different industries and different areas, like recruitment, education, law enforcement or judicial applications, public sector services, and so forth. But it’s very much use case focused: understanding what those use cases are where we have higher risk that needs to go through a more comprehensive governance and assessment process.

Oladeji  30:13   Yeah.

Meeri  30:14   So that’s the core of the proposal. And I think even though it’s obviously in process, and will be going through many, many forums, and parliament and so forth, it already gives a very good idea about what good looks like from a regulation perspective. So I think it has definitely already started to influence organizations who are using AI, who are getting the sense that we need to prepare for this; this is the way it can, and probably will, be regulated as well.

Oladeji  30:54   Yeah, that makes sense. Because, as we talked about earlier, the application of AI is really different depending on the industry it’s being operated in. So, it makes sense that there would be distinctions between the use cases, so that there’s a certain degree of respect for the type of industry the AI is being used in.

Meeri  31:26   And then there are these use cases that go across all industries, like people analytics or recruiting, for example. So, I think that’s a really nice way of laying down, or identifying, the areas where we need to be more cautious about the possible negative influences on safety, security, or fundamental rights that we have with AI.
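The tiered structure Meeri describes can be pictured as a small classification step: a use case is first checked against the proposal’s prohibited practices, then against its high-risk categories, and only high-risk systems trigger the heavier obligations. The sketch below is a loose, abbreviated paraphrase of the April 2021 proposal’s structure, not its legal text; the category sets and function name are simplified inventions for illustration.

```python
# Loose sketch of the tiered logic in the European Commission's April 2021 AI proposal.
# Category sets are abbreviated paraphrases, not the legal text.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk (comprehensive governance and assessment required)"
    LOWER_RISK = "limited or minimal risk"

PROHIBITED_PRACTICES = {"social scoring by public authorities", "subliminal manipulation"}
HIGH_RISK_AREAS = {"recruitment", "education", "law enforcement",
                   "administration of justice", "essential public services"}

def classify_use_case(practice: str, application_area: str) -> RiskTier:
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if application_area in HIGH_RISK_AREAS:
        # High-risk systems carry documentation, risk management, and oversight duties.
        return RiskTier.HIGH_RISK
    return RiskTier.LOWER_RISK

print(classify_use_case("CV screening", "recruitment"))  # RiskTier.HIGH_RISK
```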

Oladeji  31:56   Yeah. So, AI is kind of evolving pretty quickly. So I was, I wanted to ask you, you know, maybe two years from now, what problems do you think you’d be addressing that are different from the problems you’re addressing now?

Meeri  32:17   Hmm, that’s interesting. Two years, it’s always like this: what is a long time in this space? [Chuckles] I don’t know. Is this a long or short time? [Chuckles] But yeah, certainly a lot has happened already during the past few years. Probably we are much closer to having those regulations in place in two years. I personally hope that we are also way further in giving a forum for trust in AI-human interactions. So, we really want to be exploring not only, like, you know, how to do that governance effectively behind the scenes, and how to document things in the right way, how to analyze your risks, and so forth. That’s where we have started, and that’s what we are, like, you know, heavily working on at the moment. We want to take it further, find good and efficient ways so that any company, companies of any size, or public organizations of any size anywhere, can have access to that high quality, top notch expertise in this area. And you need to be solving that not only by hiring more people, because, yeah, you don’t necessarily have all those resources to hire those experts. So we need to find more innovative ways, digital ways, of giving that support. But I also really hope that we are much further on in communicating that trust, giving a forum for that trust in those AI-human interactions.

Oladeji  32:25   [Laughs] Mhm, mhm. Yeah, yeah, with online dispute resolution, I don’t think AI will gain greater adoption unless there’s trust. You know, when we’re operating in dispute systems, there are concerns around inequitable outcomes, you know, and whether AI can recognize the nuance between situations, like someone from a rural part of the country or world versus someone in a large city. And so that trust is just so fundamental for AI ODR to have greater recognition and adoption. So yeah, I think that’s a really important point. And it’s a good thing you’ll be working on it two years from now [chuckles].

Meeri  34:58   Yeah, yeah. And it’s a really interesting question: how do we form that trust with the users? It probably requires these structures and mechanisms for the users to know that there have been expert reviews, or, like, you know, that some experts have been validating that this works well, and so forth. But, like, you know, how do those things come together and realize trust between the user and the system? That’s a really fascinating question.

Oladeji  35:33   Well, that’s actually a great point, and it almost relates to another hat you wear, separate from Saidot. Correct me if I’m wrong, you’re also the chair of IEEE’s Ethics Certification Program for Autonomous and Intelligent Systems. And it sounds like, you know, there, you’re doing some type of certification for these autonomous systems.

Meeri  36:05   Mhm. Yeah, yeah, definitely. That was a very good reach into this topic. And definitely, it’s related. Yeah, IEEE is really one of the global standardization organizations, the largest professional association for technical experts. And IEEE has been doing amazing work in the AI ethics space. So, among AI ethics experts, everyone knows the work on Ethically Aligned Design, a very good reference material. So, anyone who wants to really get a full picture, a big picture, about all the different aspects that we are talking about when talking about AI ethics: go to IEEE’s materials under the Ethically Aligned Design concept. There had been a lot of work done already at IEEE, and a lot of standardization projects also started in this space. But yeah, in 2018, we were also having discussions about the needs of industry and organizations who are deploying AI tech, using AI, and how they are able to communicate about their trustworthiness, about their attempts and, like, you know, investments in AI governance, making sure that the AI is working reliably and is trustworthy. So, from those experiences and discussions, it became very clear that we also need some kind of mechanisms to be able to communicate about that trustworthiness towards the different stakeholders. And that initiated this process of starting a certification program for this AI ethics space. And the focus over there is really on the biggest themes of AI ethics, which we are seeing in all of those ethics principles: transparency, accountability, algorithmic bias, and also privacy as a fourth topic. So yeah.

Oladeji  38:18   That’s awesome. It’s such important work. And, you know, I’ve actually followed IEEE’s research for quite some time; there’s just so much, and such interesting research into different tech issues. And I guess for some people who may be listening and are unsure what it is, IEEE is the Institute of Electrical and Electronics Engineers, which, I’m just gonna say it, sounds kind of boring. But [laughs] their website is great, and the research into tech issues that they do is so important. And you know, it brings together so many different academics and practitioners from around the world to kind of problem solve around different tech issues. So, it’s great, the work you’re doing, and it’s just great overall that this institute is in existence, frankly.

Meeri  39:22   Yeah, and so it’s a great opportunity, a learning opportunity. And, like, you know, there are open groups where you can participate as well. So, it’s two-way learning: you can contribute to how standards are being shaped in this area, or the certification criteria, but you also, not only have a possibility to contribute and influence, you also learn while working there actively with all those amazing experts from around the world. So yeah. But isn’t it funny, you said that the name sounds a little bit boring [chuckles]. But that’s actually also what I’m waiting for when talking about AI. When it gets boring, then it has reached a certain stage. It has become part of how we just work, part of the normal way of working, and that’s many times a benefit. We sort of focus on the real issues. And yeah, there is beauty in the boring [chuckles].

Oladeji  40:34   [Laughs]

Meeri  40:34   How do you say it? In something being boring. [Laughs]

Oladeji  40:37   Yeah, totally. And it’s actually such an interesting point, you know: at the point where AI becomes mundane and ordinary, then we can start to talk about and explore, like, the real critical issues. And, you know, I feel like AI’s beauty, and what makes it such an attention grabber, is the fact that it can become boring. You know, like, I don’t know, I’m sure the technology is already in existence, right? You can be on the phone, and I think there are regulations around this, but in general, you can be on the phone with an AI system, communicating with it, and some people might not even realize that they’re communicating with an AI system. And that point where you realize it is an AI system, when you didn’t realize it before, is the point of, like, maximum fascination, right? [Laughs]

Meeri  41:41   That’s [chuckles] there probably will be regulation for that.

Oladeji  41:45   Yeah.

Meeri  41:46   You actually need to know what you are talking with, if you ask the EU Commission. But anyways, yeah, I think it’s a very important moment. Probably, we also need to design the governance before reaching that moment, because it might be that it’s beyond our sort of governance or control mechanisms if we don’t even recognize anymore what is AI and what is not. So, probably we need to start the governance and regulation a bit earlier, before that moment. But I agree with you, there is something about that moment. I think, seriously, one thing that is really important, where I’m working a lot with our customers, is that sometimes this whole conversation about AI ethics and AI risks and incidents related to it becomes one big mess. And it’s really difficult to even understand what are the things that apply to our context and our use cases and so forth. So that exercise is super important: to understand that not all AI-related risks in the world are specifically relevant to your use cases in your industry. So, that kind of seeing the forest from the trees, or how do you say it [chuckles], we have that saying in Finnish. But sort of seeing what is important in my context, and where should I put focus, and what is secondary? That is very important.

Oladeji  43:24   That’s such a great point. And I do get the sense that AI, and something like blockchain technology, they’re both very sexy right now. And there’s the pressure for startups to just incorporate them in general and just say, oh yeah, we’re using this type of technology, without really seeing whether the use case of that type of technology benefits your company. So yeah, I agree with that. And on the note of AI reaching the stage where it’s hard to distinguish from human intelligence, I’ve been recently reading a lot. And it’s mostly inspired by my love of sci-fi, to be honest. I’ve been reading some research into generative adversarial networks, I believe they’re called GANs. And basically, my understanding with GANs is they’re like neural networks, to a certain extent, and you can provide examples to the AI system. And from those examples, the AI system is able to produce a rule, and using the rule that the AI system created, it can provide another example in addition to what you already provided it. Have you heard of this at all?

Meeri  44:55   Yeah, there are these interesting, like, images. So, for example, these pictures of humans who don’t exist, and I think that’s how they are created. Have you seen that page? There is a web page for these humans who don’t exist.

Oladeji  45:14   I have. Yeah, yeah.

Meeri  45:18   Yeah, it’s, it’s a really interesting space.
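For listeners curious about the mechanics behind such images, here is a minimal sketch of the adversarial setup in PyTorch, shrunk to one-dimensional data so it runs in seconds: a generator learns to produce samples resembling a target distribution while a discriminator learns to tell real samples from generated ones. This is an illustrative toy, not the architecture behind any particular face-generation site; real image GANs use much larger convolutional networks.

```python
# Toy GAN: the generator learns to mimic samples drawn from N(4, 1.25).
# Illustrative sketch only; image GANs use deep convolutional networks.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # sample -> P(real)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # the "examples you provide"
    fake = G(torch.randn(64, 8))             # the generator's attempts

    # Train the discriminator: real samples labeled 1, generated samples labeled 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator output 1 on generated samples.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward 4.0 as training succeeds
```

The “rule” Oladeji gestures at is the generator’s learned mapping from random noise to plausible samples; once trained, it can produce endless new examples beyond the ones it was shown.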

Oladeji  45:22   Yeah, yeah, it is. It’s also kind of scary. But it’s, it’s progress. It’s progress [chuckles]. So, kind of the final few questions, you’re actually the first guest that has come on the show since Harvard Law School had its graduation. And yeah, I was just curious, like, you know, for, for students, for law students who just graduated, what advice would you give them as, as they start down this new chapter in their life?

Meeri  45:57   Yeah. Advising Harvard Law School students, that’s a really hard question. But I think this future will definitely be interdisciplinary. So, I think finding opportunities where you can work with people from the different disciplines that are needed in the future, when we’re living that life of very technology- and AI-driven processes and societies, I think that interdisciplinary knowledge and experience is going to be super important. And I would really prioritize opportunities for being able to work in such contexts, so that you can actually learn more about all the other aspects that we need to take into account when trying to manage these kinds of socio-technical systems. I’ve been wondering how few privacy lawyers we have in the space of AI ethics. So I definitely also want to stress this importance of interdisciplinary backgrounds and opportunities to work in teams where you have those different disciplines present. I also really hope to see more and more privacy lawyers, privacy experts, dive into this area of AI ethics and start to operationalize, start to solve these questions, handle the governance, build governance mechanisms, on top of the great practices that we already have in many organizations from the data protection, privacy perspective. So, really welcoming also more students and experts with legal backgrounds into the space.

Oladeji  47:53   Yeah, that’s great. And actually, part of me thought, when I was asking the question, that you were just gonna say, “Learn to code” [laughs]. But I think, yeah, you definitely touched on important pieces, especially the need to take interdisciplinary approaches. That’s just so important. And something like privacy lawyers, like, we know that more and more regulation is coming in that field. We also know, or we hope, I’ll hope, that customers and users of these systems will care more and more about their privacy as the technology continues to advance. So, I think that there is such an important need for more privacy lawyers. And that’s great advice to them. Yeah. So, with that, I just wanted to thank you so much for, you know, being part of the conversation, for joining today. And I’m excited to see what Saidot is doing, not just two years from now, but into the distant future.

Meeri  49:08    [Chuckles]. Yeah, thank you so much for having me, and for the conversation. Yeah, it was a pleasure.

Oladeji  49:15    Great. Thank you so much.
