Good governance essential for enterprises deploying AI
Laurel: That's great. Thank you for that detailed explanation. So, given that you personally specialize in governance, how can enterprises provide safeguards for artificial intelligence and machine learning deployment while still encouraging innovation?
Stephanie: So balancing safeguards for AI/ML deployment with encouraging innovation can be a really challenging task for enterprises. It's large scale, and it's changing extremely fast. However, it is critically important to have that balance; otherwise, what is the point of having the innovation here? There are a few key strategies that can help achieve this balance. Number one, establish clear governance policies and procedures: review and update existing policies where they may not suit AI/ML development and deployment, and add new policies and procedures where needed, such as monitoring and continuous compliance, as I mentioned earlier. Second, involve all the stakeholders in the AI/ML development process: the data engineers, the business, the data scientists, the ML engineers who deploy the models in production, model reviewers, business stakeholders, and risk organizations. And that's what we are focusing on. We're building integrated systems that provide transparency, automation, and a good user experience from beginning to end.
So all of this will help with streamlining the process and bringing everyone together. Third, we need to build systems that not only allow this overall workflow but also capture the data that enables automation. Oftentimes, many of the activities in the ML lifecycle process are done through different tools because they reside with different groups and departments, and that results in participants manually sharing information, reviewing, and signing off. So having an integrated system is critical. Fourth, monitoring and evaluating the performance of AI/ML models, as I mentioned earlier, is really important, because if we don't monitor the models, they can actually have a negative effect, contrary to their original intent. And doing this manually will stifle innovation. Model deployment requires automation, so having that is key to allowing your models to be developed and deployed into the production environment in a way that's reproducible and actually operating in production.
That's very, very important. So is having well-defined metrics to monitor the models, and those cover the infrastructure, the model performance itself, as well as the data. Finally, provide training and education. Because it's a group sport, everyone comes from a different background and plays a different role, so having a cross-understanding of the entire lifecycle process is really important. And education on what the right data to use is, and whether we are using the data correctly for the use case, will prevent the model deployment from being rejected much later on. So all of these, I think, are key to balancing governance and innovation.
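To make the monitoring point concrete, here is a minimal sketch of what well-defined metrics across data, model performance, and infrastructure could look like. This is not JPMorgan Chase's actual tooling; the metric choices (population stability index for drift, accuracy, p99 latency) and all thresholds here are illustrative assumptions:

```python
import math

# Hypothetical thresholds: in practice each would come from the
# model's governance policy, not be hard-coded.
THRESHOLDS = {"psi": 0.2, "accuracy": 0.90, "p99_latency_ms": 250.0}

def _bucket_fractions(values, lo, width, bins):
    """Fraction of values falling in each of `bins` equal-width buckets."""
    counts = [0] * bins
    for x in values:
        idx = min(int((x - lo) / width), bins - 1)
        counts[idx] += 1
    # Floor at a tiny value so the log in the PSI is always defined.
    return [max(c / len(values), 1e-6) for c in counts]

def psi(baseline, live, bins=10):
    """Population stability index, a common data-drift metric."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range
    b = _bucket_fractions(baseline, lo, width, bins)
    l = _bucket_fractions(live, lo, width, bins)
    return sum((lv - bv) * math.log(lv / bv) for bv, lv in zip(b, l))

def accuracy(y_true, y_pred):
    """Model-performance metric on a labeled sample of live traffic."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def p99_latency(latencies_ms):
    """Infrastructure metric: 99th-percentile serving latency."""
    ranked = sorted(latencies_ms)
    return ranked[min(int(0.99 * len(ranked)), len(ranked) - 1)]

def monitoring_report(baseline_scores, live_scores, y_true, y_pred, latencies_ms):
    """Evaluate each metric against its threshold; ok=False means alert."""
    drift = psi(baseline_scores, live_scores)
    acc = accuracy(y_true, y_pred)
    lat = p99_latency(latencies_ms)
    return [
        ("psi", drift, drift <= THRESHOLDS["psi"]),
        ("accuracy", acc, acc >= THRESHOLDS["accuracy"]),
        ("p99_latency_ms", lat, lat <= THRESHOLDS["p99_latency_ms"]),
    ]
```

A report like this would run on a schedule against production traffic, which is the automation that keeps monitoring from becoming the manual, innovation-stifling work described above.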
Laurel: So there's another topic to discuss here, and you touched on it in your answer, which is, how does everyone understand the AI process? Could you describe the role of transparency in the AI/ML lifecycle, from creation to governance to implementation?
Stephanie: Sure. So AI/ML is still fairly new and still evolving, but in general, people have settled on a high-level process flow: defining the business problem, acquiring and processing the data to solve the problem, then building the model, which is model development, and then model deployment. But prior to deployment, we do a review in our company to ensure the models are developed according to the right responsible AI principles, and then there's ongoing monitoring. When people talk about the role of transparency, it's not only about the ability to capture all the metadata artifacts across the entire lifecycle and all the lifecycle events; all this metadata needs to be transparent, with timestamps, so that people can know what happened. And that's how we share the information. Having this transparency is so important because it builds trust, it ensures fairness, since we need to make sure that the right data is used, and it facilitates explainability.
Models need to be explained: how do they make their decisions? Transparency also helps support ongoing monitoring, and it can be achieved through different means. One thing that we stress very much from the beginning is understanding the AI initiative's goals, the use case's goals, and the intended data use. We review that. How did you process the data? What are the data lineage and the transformation process? Which algorithms are being used, and which ensemble algorithms are being used? The model specification needs to be documented and spelled out, including the limitations of when the model should and should not be used. Then there are explainability and auditability: can we actually track how this model was produced, all the way through the model lineage itself? And also technology specifics such as the infrastructure and the containers in which it runs, because these actually impact the model's performance; where it's deployed; which business application is actually consuming the prediction output of the model; and who can access the decisions from the model. So, all of these are part of the transparency subject.
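As a loose illustration of the kind of timestamped, end-to-end metadata capture described here, a minimal record might look like the following sketch. This is not JPMorgan Chase's internal system; every field name and value is a hypothetical stand-in:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class LifecycleEvent:
    """One timestamped entry in a model's audit trail."""
    stage: str     # e.g. "data_processing", "training", "review", "deployment"
    actor: str     # who performed the step
    details: dict  # stage-specific metadata: lineage, algorithms, container image...
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ModelRecord:
    """Minimal transparency record covering the themes above."""
    use_case_goal: str
    intended_data_use: str
    algorithms: list
    limitations: str            # when the model should and should not be used
    consuming_application: str  # which business app consumes the predictions
    events: list = field(default_factory=list)

    def log(self, stage: str, actor: str, **details):
        """Append a timestamped lifecycle event to the audit trail."""
        self.events.append(LifecycleEvent(stage, actor, details))

# Hypothetical usage:
record = ModelRecord(
    use_case_goal="flag potentially fraudulent transactions for human review",
    intended_data_use="card transaction features only; no prohibited attributes",
    algorithms=["gradient-boosted trees"],
    limitations="not validated for merchant-initiated transactions",
    consuming_application="fraud-review dashboard",
)
record.log("training", "data_scientist_1", data_lineage="feature_store:v12")
record.log("deployment", "ml_engineer_2", container_image="model-serving:1.4.2")
```

Because every event carries an actor and a timestamp, a record like this supports the trust, fairness, and auditability goals: a reviewer can reconstruct who did what, when, and with which data, all the way through the model lineage.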
Laurel: Yeah, that's quite extensive. So considering that AI is a fast-changing field with many emerging technologies, like generative AI, how do teams at JPMorgan Chase keep abreast of these new inventions while also choosing when and where to deploy them?