Closing the Loop: Creating a Bias to Action in Banking and Beyond

Presented By:

Carl Ryden - CEO & Co-Founder, PrecisionLender

Description

Gather data from each action, learn from that data, and then coach relationship managers on how to use those learnings to take action, and achieve better outcomes. Sounds simple, right? But it often isn't. Carl Ryden, CEO of PrecisionLender, will share and explore the Applied Banking Insights loop. He'll explain how banks can ensure their data gathering and analysis provides a pathway to action for bankers, rather than gathering dust in a silo somewhere in a remote corner of the bank.

Transcription

Thank you all for coming. This is the fourth year we've done BankOnPurpose, and the second year I've spoken. I managed to avoid it for two, then got drafted into it last year and did a reasonably good job, so they made me do it again. Every year Brandy says, "I need a title." I'm like, "I don't know what I'm going to talk about six months from now. You come up with a title." For two years in a row she's come up with a title and then I've filled in the details, and each year she's done a really fantastic job.

Last year I talked about AI and the future of banking, and I ended up going back to 1968 to start with a story. This year, I want to go all the way back to 1935. On October 30th, 1935, outside of Dayton, Ohio, at the US Army Air Corps' Wright Field, a competition was underway. The Air Corps was in the long shadow of World War I and had realized (it wasn't a universal notion within the Army, but within the Air Corps they saw it) that in the next war, air power was going to play a huge role, particularly strategic long-range bombing. So they were running a competition for the next generation of long-range bombers among the three major aircraft manufacturers in the US at the time: Boeing, Martin, and Douglas. All of the aircraft executives and the Army brass had converged on Wright Field in Dayton, Ohio on this foggy fall morning.

Everybody there knew, though, that it really wasn't a competition. Everybody knew who was going to win. Boeing had an airplane called the Model 299. You've probably never heard of it, but this airplane was absolutely gorgeous: a shiny aluminum-alloy body and four engines where every other plane had two. In all the previous trials, it had completely trounced the competition. This wasn't going to be a competition between these guys. It was going to be a slaughter. It was going to be a coronation. The Army was going to order 65 planes, which in that day was a huge order for any aircraft manufacturer. The Boeing 299 could fly faster than any other bomber. It could fly twice as far and twice as high as any other bomber, and it could carry five times the bomb load the Army had put in their specs as a requirement.

This was going to be a massive win, and all the aircraft manufacturer execs were there to see it, even the ones who knew they were going to lose, along with the Army brass. That day the 299 was piloted by Major Ployer P. Hill, one of the Army Air Corps' most seasoned and experienced pilots, probably one of the most experienced in the history of human flight at that point. He had four additional crew members on the plane, for a crew of five. Everybody was in the stands. This beautiful plane rolled out with its four huge engines where every other bomber had two; you could feel the power of this thing. It went down the runway, took off, soared up to 300 feet, stalled, fell off to the left, and crashed in a fireball.

Two of the five crew members were killed, including Major Hill, and of course the Army Air Corps conducted an investigation. The investigation revealed the problem was pilot error. The plane was equipped with pins that held the control surfaces in place while it was on the ground, so that the wind wouldn't damage them. Before the pilots and crew got in, those safety locking pins had not been released, so the stick wouldn't move and they couldn't actually fly the aircraft.

This plane was substantially more complicated than any plane that had ever been built. It had four engines, each with a fuel-air mixture that had to be controlled just right. It had hydraulic controls for the control surfaces. It also had variable-pitch propellers, and all of this complexity made it very difficult to remember everything you had to do. So of course they forgot this one critical thing, and it cost them dearly. A newspaper deemed the plane too complex for a human to fly: too many bells, too many whistles, too many things going on. But all of this complexity wasn't complexity for complexity's sake. It was complexity that provided all the strategic advantages I just noted: it could carry five times the bomb load, with twice the range and twice the ceiling, faster than anybody else.

The Air Corps declared Douglas's smaller, simpler two-engine plane the winner and placed an order with them, and Boeing nearly went bankrupt. Luckily, some people in the Army Air Corps saw the potential and said, "We can't let this go. This plane is too valuable to our security and to our future, and it provides a meaningful strategic advantage. We have to figure out how to manage this complexity." For these guys, eliminating that complexity and giving up that advantage by buying the simpler plane wasn't an option. Right?

So they set about figuring out how to manage this, and what they didn't do is probably more important, and more interesting, than what they did. They didn't say, "Well, we'll do more pilot training," because of course that made no sense. Major Hill, who flew the first plane, was one of the most experienced aviators in the history of aviation. Right? You can't get more experienced than that. There was no way to train someone to be better, so training wasn't an option.

They said, "Well maybe we can add more crew. Well, we had a hand picked crew of five," and one of the other things is if a war broke out, they all knew the ability to build aircraft wasn't going to be America's problem because we had great productive might. The problem was going to be training crew and having a crew to staff those airplanes, so that's the bottleneck. Adding that wouldn't help.

What they did was simple. It was elegant. It would change the future of aviation. They developed a checklist. They put checklists on three-by-five cards, each pertinent to a situation: a preflight checklist, a pre-takeoff checklist, a pre-landing checklist. Here are the things. They didn't tell you how to fly the plane. They didn't tell you what to do, but they removed all the dumb stuff. They removed the chance for needless failure. When they did this, the US Army went on to buy 13,000 of these planes, which ended up flying 1.8 million miles without further incident. So it went from too complex for any human to fly to enormously safe.

They rebranded this plane as the B-17. You may know it as the Flying Fortress. It went on to be an integral part of winning World War II, and because they figured out how to help humans manage and deal with that complexity, that complexity gave them a strategic advantage.

Now, there's a lot of complexity in banking that's there for complexity's sake, and we need to eliminate it. But there are some forms of complexity that really allow us to differentiate the experience, differentiate ourselves, and provide a strategic advantage. Knowing the difference between those two is really important, and so is figuring out how we help humans manage that complexity.

Fast forward nearly 75 years, to January 15th, 2009, and I'm doing a little bit of an Avengers thing here where we jump around in time. This is the one that probably hits close to home for some of the folks from Charlotte. At 3:25 PM, US Airways flight 1549 takes off from LaGuardia airport, captained by Chesley "Sully" Sullenberger. We all know Sully from the television show and the movie. He was an Air Force veteran with 20,000 hours of flight time, one of the most experienced pilots in the US air fleet. His copilot was Jeff Skiles, who had an equal 20,000 hours of experience, mostly in the 737. He had been a captain of commercial airliners before, but found himself in the copilot seat due to downsizing, and so he had retrained on the Airbus A320.

The plane took off at 3:25. Two minutes later, at 3:27 PM, they encountered a flock of Canada geese that flew right into their flight path. The engines ingested multiple geese and immediately shut down. On the voice recorder you can hear Captain Sullenberger go, "Oh shit," when he sees them coming, and you can actually hear the birds hitting the plane from the outside. He said his first inclination when he saw them was to duck in the cockpit. They had lost all power on a commercial airliner flying at 3,000 feet and all of a sudden had to figure out what to do. Flying the plane on takeoff was the copilot, Jeff Skiles, the second in command. As soon as they hit the birds, on the cockpit voice recorder you hear Sully say, "My aircraft," and immediately Jeff Skiles answers, "Your aircraft." That's the code, the language they had agreed upon for how they would switch control.

Now, what's interesting is that these two guys, Jeff Skiles and Sully, had just met. They had never flown together before. Both of them had been captains, and normally that isn't a great thing, because two captains will fight for control. But there was no debate, no argument, none of that "I got this, you got this." My aircraft, your aircraft, and control was handed over.

Why did this happen? Part of their preflight checklist was for the crew members to introduce themselves, get to know each other, and discuss what might go wrong and what they might do about it. They had decided in that preflight discussion that if something went wrong, Sully, who had more hours flying the Airbus A320, would take control of the plane. Jeff Skiles had just gone through all the recent training on the safety features and the checklists of the A320, because he had been requalified on that new aircraft, so he would take control of going through the checklists.

The other thing to note is that Sully was on the left side of the aircraft, and everything you really want to avoid was out the left side of the aircraft: Manhattan, and ultimately the Hudson River as well. At 3:28, one minute later, Sully notifies the tower, "We're going to be in the Hudson." He issues the brace-for-impact command and immediately hears his flight attendants behind the scenes scurrying around, readying the passengers, following their procedures and their checklists. Sully took control of the plane and glided it down to land it in the Hudson, but even in this he wasn't on his own. The A320 has this really neat system they call the green dot system, a fly-by-wire control that, as you're trying to glide the plane down, gives you a green dot and manages the control surfaces, so if you get a wind gust you can maintain the optimal angle and stay in the air the longest amount of time.

So he was flying the plane, but he was getting assistance from that system. It freed him up to focus on the really important things: making sure he landed near the end of the Hudson where there were ferries and other boats that could rescue the passengers; notifying the Coast Guard and the other authorities to meet him there, because he was going to be in the Hudson; and keeping the wings level as the plane hit the water. That last part is really important so the plane doesn't tip and roll.
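As an aside, the kind of assist described here is essentially a small control loop: it handles a task humans do poorly (holding an exact speed through gusts) so the human can focus on deciding where to put the plane. Below is a toy sketch of that idea in Python; it is not the real Airbus logic, and the green-dot speed and gain numbers are invented purely for illustration.

```python
# Toy sketch of a "best glide" assist: a proportional controller that
# nudges pitch to hold an optimal glide speed through gusts.
# Illustrative only -- the real A320 fly-by-wire is far more complex,
# and these numbers (green-dot speed, gain) are invented.

GREEN_DOT_SPEED_KTS = 210.0  # hypothetical optimal glide speed
GAIN = 0.05                  # degrees of pitch correction per knot of error

def glide_assist(speed_kts: float, pitch_deg: float) -> float:
    """Return an adjusted pitch that steers speed back toward green dot."""
    error = speed_kts - GREEN_DOT_SPEED_KTS
    # Too fast -> pitch up slightly to bleed speed; too slow -> pitch down.
    return pitch_deg + GAIN * error

# A gust knocks the speed off target; the assist trims pitch automatically,
# freeing the pilot for the human decision: where to put the plane.
print(glide_assist(speed_kts=225.0, pitch_deg=-2.0))  # -> -1.25
```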

3:30 PM, five minutes after takeoff, though the five minutes matters less than the three minutes from the time they hit the birds to the time they ended up in the Hudson. The plane touches down in the water, and within three minutes the entire plane is evacuated. All 155 people aboard are safe, with only minor injuries. It goes down in history as one of the greatest ditchings in the history of commercial aviation.

Now, I tried to get through those slides in the amount of time that Captain Sully and his crew had that day, and I think I failed. It took me longer to tell the story than it took to happen in real life. After the incident, the media tried to make Sully a hero, because the media likes heroes, right? "It had to be something innate about you. The fact that you used to fly gliders, or that you're a safety expert. You're the hero of the story." He continually deflected that and said, "It wasn't me, it was my crew. And in fact, flying gliders doesn't really help you fly an A320; it's a completely different thing. It was our crew and the procedures and the training and the checklists that we followed."

After the incident, as you'd expect, the NTSB (for those from other countries, that's the National Transportation Safety Board) did an investigation, as they should. One of the things they did was put fresh pilots in the simulator, have them run through the scenario, and tell them: when you hit the birds at 3:27, two minutes into the flight, immediately try to turn back and land the plane at LaGuardia. Because if you could land the plane back at the airport, you'd save the plane and the passengers, which is better than putting it in the Hudson. They were trying to see if Sully made the right decision.

When they ran this, in seven of the 13 trials, 54%, the pilots actually made it back to LaGuardia. Well, that's pretty good. However, in the other six, they crashed into buildings in Manhattan and everybody died. So, back to Rory's discussion about coming up with the right answer. But what they also figured is that no one would know immediately upon hitting the birds that turning back was what they needed to do. Once they added a 35-second delay, just the time for a human to assess the situation and make that decision, zero of the 13 made it back.

So, cool stories, Carl, appreciate it, but what does this have to do with creating a bias for action in banking and beyond? I think it has a lot to do with it, and that's why I've leaned on those stories. Last year when I spoke here, I told the story of the journey we've been on at PrecisionLender, what we've learned building Andi, and how we think about what we build. When we started, we started with a highly interactive user experience that would provide, oddly enough, little green dots to help folks navigate the deal, very similar to the ones that helped steer the plane, with smaller consequences, but still helping folks navigate.

We built this because I hated systems where you'd fill out a screen, hit next, fill out another screen, hit next, fill out another screen, hit next, and then it would say, "Sorry, that doesn't work," and you'd inflict that on the RMs. Can't you just tell me the answer? Don't make me guess. Just do this for me. You have a computer at your disposal. That was the argument I made to the team, and why we built it the way we built it: it would constantly just give you the answers, the best information we have, here you go.

Then we met this guy, and I have a superstition: in almost every presentation I give in public, I put Andy's face on the screen, and I've told him this. When we met Andy Max, we saw firsthand the power of coaching, of being able to actually coach an RM to do something different. "Hey, I saw Joe do this last week. He chose the zig, and I told him to think about zagging, and it really helped him. Jim, you're doing the same thing. Why don't you think about this?" That's really what inspired us to build Andi: to basically embed Andy Max into our system. How do we build something that can look over the RM's shoulder, see what they're seeing, see what they're doing, and then coach them to do better things?

From the beginning of this entire journey, back when I started working with banks many years ago on this particular problem, I thought about what we're really trying to do here. It's more than just calculating numbers. There's a lot more going on: the behavioral aspects, the conversational aspects, the conversation you enable between the RMs and their customers. And I had this simple guiding principle: the machine should do what the machine does well so that humans can do what humans do well, even better. Right?

The machine is there: calculate the answer for me. You can do that; you've got a computer at your disposal. Last year I shared with the audience several of the things we learned along that journey, and this was one of them, one that really hit home with a lot of folks. I've given this presentation, or a shorter version of it, around the world in a bunch of different places, and this is the one folks lean on a lot: what we discovered is that nobody really wants artificial intelligence. They just want intelligence, right? Artificial is fine; if it has to be artificial, we'll go artificial. But there's a lot of intelligence that's just latent in the bank, and we could deploy it by putting it in front of the right eyeballs at the right time.

But as we continued this journey, we allowed folks to build skills for Andi, and we asked them to ask us to build skills. A lot of the skills people were building, when you pulled back the covers, ended up being almost checklists, right? Just simple checklists. And I found myself asking bankers: if you could get every one of your RMs and loan officers, whoever's using the tool, to follow a 300-point checklist with the discipline of an airline pilot, would that be a good thing? Would that produce better results? The answer is almost universally yes.

The next question is: could you get them to do such a thing? The answer is almost universally no. But what if, instead of a 300-point checklist that you hammered into their heads with training, it was very much like the checklists for the B-17, very contextual? I don't try to give you all 300 points. Instead, here are a hundred different checklists, each three items long, that only show up at certain times, and only if you're not already exhibiting those behaviors.
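To make that concrete, here is a minimal sketch of what such a contextual checklist engine could look like. Everything in it, the Checklist shape, the trigger, the example items, is hypothetical and invented for illustration; it is not how Andi is actually implemented.

```python
# Minimal sketch of a contextual checklist engine: many small checklists,
# each with a trigger, surfaced only when the situation applies and only
# for items the user isn't already doing. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Checklist:
    name: str
    trigger: Callable[[dict], bool]  # does this situation apply right now?
    items: list[str]                 # ~3 items each, not 300

def items_to_surface(checklists: list[Checklist],
                     context: dict,
                     observed_behaviors: set[str]) -> list[str]:
    """Return only the items that are relevant now and not already done."""
    due = []
    for cl in checklists:
        if cl.trigger(context):
            due += [item for item in cl.items if item not in observed_behaviors]
    return due

pricing = Checklist(
    name="pre-pricing",
    trigger=lambda ctx: ctx.get("stage") == "pricing",
    items=["check relationship profitability",
           "review deposit cross-sell",
           "confirm rate-index fit"],
)

# The RM is pricing a deal and has already reviewed cross-sell,
# so only the other two items show up.
print(items_to_surface([pricing],
                       context={"stage": "pricing"},
                       observed_behaviors={"review deposit cross-sell"}))
```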

A lot of these skills that we see people building ended up being simple checklists, or dynamic checklists, delivered to influence behavior. This actually made me rethink our guiding principle. Remember I said the machine should do what machines do well so that humans can do what they do well? After thinking this through, I'd restate it in terms of the system supporting the human, because it doesn't have to be a machine. It could be a printed checklist. It could be other things.

A system that you use to support humans should do everything that humans do poorly, so that humans can do the things humans uniquely do well. This is really the essence of building tools. For ages, humans have built tools, and we build tools that compensate for our weaknesses. We don't have claws or fangs, so we build spears, we build arrows, we build things to compensate. And it turns out humans do a lot of things really poorly. I'm a human, so I can say this without feeling too bad. We have terrible memories. We're terrible at complex mathematics. We're really horrible at assessing probabilities, and we don't deal well with complexity, even the best of us.

Then add to all of this stress, ambiguity, distraction, emotion: anger, fear, greed, lust, the seven deadly sins. I think they're called the seven deadly sins because they create emotions that amplify the things we really stink at, which is what makes them deadly. The checklists on the B-17 and the fly-by-wire on the A320 that Sully had were systems engineered to do what humans do poorly. They compensated for our weaknesses, the poor memory, the trouble with complexity, those things, and they really work. They do that so the humans can focus on the things they really need to do.

You really needed Sully to be able to focus on deciding where to put that plane. It's easy, in a spreadsheet, to do the calculus and think, "Well, there's a 50% chance, if I make these assumptions, of the plane getting back to LaGuardia." If it's me on that plane, I don't want an algorithm making that decision. I want a human making that decision, and in fact I want that human to be the person sitting at the front of the plane. So what you want to do is remove all the noise, all the friction, all the things that would cloud their judgment, and let them exercise human judgment. That's what we mean when we talk about amplifying the human in the loop.

Sully had less than a minute to make a decision that would affect 155 lives, including his own. After the aircraft hit the geese, you can hear on the cockpit voice recorder Sully and his copilot, Skiles, calmly but quickly going through their checklist, trying to restart the engines, and, while trying to restart the engines, preparing for a water landing: closing up hatches, making other preparations, and notifying the Coast Guard.

The checklist didn't make the decision, and that's the key thing about the design of checklists. The checklist didn't make the decisions. The green dot didn't fly the plane; Sully flew the plane. What these things did was provide a foundation of safety for making the decision. They didn't create a bias for action by themselves, but they created an environment conducive to taking action, because the noise was eliminated: you know you have the best knowledge available behind you right now. You actually hear it at the end. They finish their checklist, and Sully turns to Skiles and says, "I think we're going to be in the Hudson. Do you see any other alternative?" He actually said, "You have any other ideas?" And Skiles goes, "Nope," and the decision was made. By taking care of all those things, they were able to stand on the shoulders of every accident that had ever happened before, make sure those bases were covered, and then make the decision. I think that's really quite good.

A lot of what I've told you today comes from, or is influenced by, one of my favorite books of all time, The Checklist Manifesto. Some of the stories we've dived into a little deeper. A guy named Atul Gawande wrote this book, and in it he tells a wonderful story about some philosophers from the 1970s writing on the sources of human fallibility. They break it into three pieces. One is what they call necessary fallibility: we're human, and there are just some things we can't do. We'll give it our best, but some things are beyond our control.

The two pieces of human fallibility that are within our control are ignorance and ineptitude. I know those sound like harsh words, but let's give them some meaning. Ignorance is when you don't have the knowledge, or access to the knowledge, that you need at the point of making a decision. It may be that you don't know what causes what: a patient presents with some symptoms, and you don't know that those symptoms map to this disease. You don't have access to that information, and you make a mistake. That's ignorance.

Ineptitude is when you had access to the information, you had access to the knowledge, but you just didn't use it. You didn't apply it at the right time or in the right place. That's ineptitude. What's interesting about this... well, let me tell you a little story. Imagine you bring your daughter into the ER, and she has pain in her abdomen shooting down to the lower right side, swelling, a low-grade fever, and some other symptoms. The doctor will quickly recognize that that's probably appendicitis and begin treating your daughter, right? He had the knowledge of what those symptoms mapped to, and it was applied at exactly the right moment, so we passed both tests.

Digital technology, mobile phones, the internet, the world wide web, Google, Wikipedia, WebMD, all these things have now given us access to a vast array of knowledge, and we've moved on from a world where human failure was dominated by ignorance. A hundred years ago, you could go into a hospital and they might not know that those symptoms mapped to that disease. Now they know it. There are places on this earth right now where they might not know, and there the problem is still ignorance. But with mobile phones, knowledge is everywhere. The primary mode of human failure has moved from ignorance to ineptitude: not applying knowledge at the right time, at the right moment, where it can have the most impact.

So what about AI? How does it fit into this? There's a definition of artificial intelligence I read a lot. If you look it up, it's "the theory and development of computer systems able to perform tasks normally requiring human intelligence," blah, blah, blah. It basically says: whatever computers can't do, until they can, which also keeps it always a little bit over the horizon. But what's interesting is that it focuses on teaching a machine to do the things humans can do. The aspirational goal of this definition is to build machines that can do the things only humans can do: vision, understanding, those things. All of that is cool. It's good stuff; we use natural language recognition in Andi. But it's not really about teaching a machine to do what a human can do.

The most powerful use cases are teaching the machine to do the things humans can't do. Surround the human with a system that focuses on the weaknesses of the human and insulates us from them, so that we can do the things that we as humans are uniquely capable of doing: trust and empathy and judgment, the things that Sully displayed. The other thing that's pretty interesting: this is about solving the ineptitude problem, not the ignorance problem. Right? And I think the ineptitude problem is actually a much easier problem to solve, because we've got proven mechanisms for solving it: checklists, simple guidance, those things.

So what about intelligence? I've developed a certain language for how I think about things, and we'll talk about it in a second. On the left is a cover of The Economist. I don't know how many of you have seen it. In 2017, The Economist basically declared that data is the new oil, and when they did, I think it triggered a lot of executives in the boardrooms of corporations and banks to have what I call their Jed Clampett moment: "Oh my God, we're sitting on this. We've got a lot of data. We must be rich and not know it, right?"

The Economist's metaphor, I think, was far better than they even knew at the time. A lot of the data that exists within large corporations, banks among them, is really the Canadian tar sands of data. When you look at oil, there are proven reserves, which you know you can get to. (There are some folks from Hancock here who actually know about oil.) There's economically recoverable oil, which you can recover and still make money doing so. And then there's technically recoverable oil: you could get it, but you probably don't want to.

Then there's a bunch of other oil, and the amount that's actually economically recovered, versus all the oil that exists under the ground, is a small fraction. Likewise, there's data within organizations that is technically translatable into valuable information and intelligence, but not economically viable to translate. So here's the way I think about the hierarchy, starting at the top. Data is the facts: the who, what, when, and where of what happened. If you give those facts context and a little bit of narrative, and you get enough of them, you get information. Information is like a time series, a trend: more of this is happening over time, less of that is happening over time, this is going up, this is going down, this is becoming a greater percentage of things. That's information.

Intelligence gives it even more context: it puts that information together with a critical question. Intelligence has potential value, and that potential comes in two ways. One, it can increase the probability or the magnitude of a good outcome. Two, it can decrease the probability or the magnitude of a bad outcome. But it only has that potential. Here's where it takes a turn: intelligence can go one of two places. Applied at exactly the right moment, at exactly the right time, it becomes insight, and you unlock its potential value. At the wrong moment, at the wrong time, it's trivia. This is one of my favorite slides. I showed it to some of the young folks on my team, and they had no idea who the person on the left was; I think their lives are worse for that.

But go back to the previous example: the knowledge that pain in the lower right side, a low-grade fever, and abdominal swelling map to appendicitis. That knowledge is potentially valuable because it can really decrease the likelihood and the magnitude of a bad outcome: your appendix rupturing and you dying. In the ER, in the hands of the guy on the right, it's almost priceless. It's an insight. In a bar over a beer with the guy on the left, it's trivia. The value of intelligence is contextual, and when it's applied really matters.
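One way to make that hierarchy concrete is as a pipeline where each stage adds context, and the value is only realized at delivery time. The sketch below is just an illustration of the distinction; the banking example, names, and numbers are all invented.

```python
# Sketch of the data -> information -> intelligence -> insight hierarchy.
# Each step adds context; the value is only unlocked by timely delivery.
# The example data, names, and wording are invented for illustration.

data = [  # data: the facts -- who, what, when, where
    {"rm": "Jim", "deal": 101, "fee_waived": True},
    {"rm": "Jim", "deal": 102, "fee_waived": True},
    {"rm": "Jim", "deal": 103, "fee_waived": True},
]

# Information: facts given context -- a trend.
waive_rate = sum(d["fee_waived"] for d in data) / len(data)

# Intelligence: information paired with a critical question.
intelligence = {
    "question": "Is Jim giving away fee income?",
    "finding": f"Jim waived fees on {waive_rate:.0%} of his recent deals",
}

def deliver(intel: dict, moment: str) -> str:
    """The same intelligence is insight or trivia depending on the moment."""
    if moment == "while pricing the next deal":
        return f"INSIGHT: {intel['finding']} -- consider holding the fee."
    return f"trivia: {intel['finding']}"

print(deliver(intelligence, "while pricing the next deal"))
print(deliver(intelligence, "in a quarterly report months later"))
```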

Unfortunately, I think a lot of reporting tools and business intelligence tools, at their best, produce exactly what they say they'll produce. The ingredients match the label: they produce intelligence, and that's valuable. There is value in that. Think of intelligence as the opposite of ignorance: it's the light that removes ignorance as a failure mechanism, and it creates the potential for value. The way you take that intelligence and make it really, really valuable is to apply it in the moment where it can be most effective at changing a behavior, delivering it in a way that accentuates what you're asking a human to do.

This is what leads to how we've come to think about PrecisionLender and what we're doing. We think about it as applied banking insights, and we chose those words purposefully. The idea is that you center everything around your clients, and your goal is to make them better. At the top, we put the human experience. We need the bankers. Commercial banking really leans on bankers; it's going to lean on bankers for a long, long time, and it should. We need them to have knowledge and experience and trust and empathy, and we need them to be human, things that only they are capable of being. What we want to do is take away all the hard stuff. Asking them to calculate Basel III capital in their heads is just a dumb idea. It's like asking Sully to fly the plane by working 14 controls at once. You can't do that, so we try to give them a great user experience.

Now, what's also interesting is that's where we started, and then we discovered from working with Andy that the value of coaching is huge. I talked a little about that last year; that's the left side, and Andi really lives in that orange area. What we also noticed is that a lot of corporations and banks, having had their Jed Clampett "data is the new oil" moment, would start at the teal box on the right side. They'd launch big data projects and data warehousing projects, spend billions of dollars, and have nothing to show for it. Then they'd go, "Well, we've got to do something," so they'd hire a data science team and a bunch of analysts, who'd start developing insights in the form of reports and other deliverables, delivering intelligence.

The problem is they still had nothing to show for it. You have to have a way to connect that last mile: how do we connect that intelligence to the moment where it can have the most impact and really unlock all of its value? That's where we came up with coaching. Most folks, I think, do this wrong: they start at the data and try to work their way up. What we did is start at the blue box at the top: what do we need the human to do? What's standing in the way of them being human? Then we worked around the other way. We built Andi to deliver coaching, to do the math for them, do their homework for them, do the lookups for them, funnel this stuff to them.

Then we go back and say, "Okay, we need more insights to drive better coaching. What data do we need to support those insights?" Then we get that data. That way you're assured you're adding value at every point, and you get around the flywheel faster and faster, because ultimately better humans give you better data, better data gives you better insights, better insights give you better coaching, and better coaching gives you better humans. And you keep going around that flywheel.
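Spelling that ordering out: you design the loop from the human backwards, then run it forwards. The sketch below is a paraphrase of the flywheel; the stage names and descriptions are illustrative, not PrecisionLender's terminology.

```python
# Sketch of the applied-insights flywheel. Design order runs from the
# human backwards; in operation the loop runs forwards and compounds.
# Stage names and descriptions are illustrative paraphrase.

design_order = [
    ("human experience", "free the RM to exercise judgment, trust, empathy"),
    ("coaching",         "deliver the right nudge at the right moment"),
    ("insights",         "intelligence paired with the moment it matters"),
    ("data",             "gather only the facts those insights require"),
]

for stage, purpose in design_order:
    print(f"design {stage}: {purpose}")

# In operation the loop runs the other way around the flywheel:
# better data -> better insights -> better coaching -> better humans ->
# better data, faster each time around.
```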

Everything you're going to see us do in the coming year will tie back to this. We have PrecisionLender at the top, which we'll continue to invest in. We have Andi on the left, the coaching, the delivery mechanism for the intelligence. And then the lower right, the insights and the data, is what we call our PrecisionLender L3 platform: how do we actually take all that data and pull it together? That's all I've got. Thank you guys.
