Each week on eDiscovery Leaders Live, I chat with a leader in eDiscovery or related areas. Our guest on October 30 was Udi Hershkovich. Udi is a Principal Business Development Manager at Amazon Web Services (AWS), a go-to-market specialist focused on artificial intelligence and Amazon Kendra.
Udi and I talked about his area of expertise, artificial intelligence and machine learning, and its role in litigation. We discussed how to make AI/ML accessible to everyone and examined what is meant by “artificial intelligence”. Udi covered AI/ML in eDiscovery, not just for pre-processing and review but also for case assessment. We took a look at graph databases and how to have a discussion with your documents, areas with phenomenal potential. Finally, Udi left us with some thoughts on how to get started with all of this.
Recorded live on October 30, 2020 | Transcription below
Note: This content has been edited and condensed for clarity.
Welcome to eDiscovery Leaders Live, hosted by ACEDS and sponsored by Reveal. I am George Socha, Senior Vice President of Brand Awareness at Reveal. Each Friday morning at 11 Eastern, I host an episode of eDiscovery Leaders Live, where I get a chance to chat with luminaries in eDiscovery and related areas.
Past episodes are available on the Reveal website. Go to revealdata.com, select Resources and then select eDiscovery Leaders Live Cast.
I am pleased to have joining us as our guest today Udi Hershkovich. Udi is at Amazon Web Services, where he is a go-to-market manager focused on artificial intelligence and machine learning. In particular, one of his areas is working with Amazon Kendra, and he’ll give us an explanation of what that is. Udi has both a Bachelor of Arts and a Bachelor of Laws from IDC – that’s the Interdisciplinary Center – in Herzliya, Israel. He has worked as a software engineer in the Israeli Defense Forces and at Motorola Home & Networks Mobility, and he has held various roles at XACCT Technologies, Amdocs, Safaba Translation Solutions, LeanFM Technologies, and Business Imperatives LLC. Since 2018, Udi has been at AWS. Udi, thanks very much for taking the time to join us today.
Thank you, George. It’s a pleasure and thank you for inviting me.
Glad to have you. Let’s start with this: what is it that you do at AWS, anyway? I gave a brief introduction, but of course, no real content or context. What do you do?
Sure. I manage business development for AI services. At AWS, we have a broad AI services portfolio. In the context of explaining what I do, maybe I’ll take a step back to talk about what we do at AWS with AI and tie that into my role, if you don’t mind.
Making Artificial Intelligence and Machine Learning Accessible to Everyone
To level set, AI is essentially a way of describing any system that can replicate human cognition or human decision making. Machine learning is the underlying technology, which uses large amounts of data to create and validate decision logic, essentially driving AI. Now, AI – or machine learning, and I’ll use the terms interchangeably throughout this conversation – has turned from a somewhat aspirational technology, used over the years by very large companies and academia, into very much a mainstream technology of mainstream applications in just the last few years. That is mainly due to the advent of cloud computing, which opened up resources that were previously unavailable or very expensive to access, like storage and compute. Today machine learning and AI are used in pretty much every industry across the board. But we’ve identified that it’s still a very big challenge to adopt machine learning and to utilize AI, because not many people actually have machine learning expertise, access to data is very difficult, and so on. We have decided that our mission at AWS is going to be to put machine learning in the hands of every developer, making machine learning easy and accessible to everyone – in other words, as boring as possible, so that really everyone can utilize it.
We have decided that our mission at AWS is going to be to put machine learning in the hands of every developer, making machine learning easy, accessible to everyone
My role at AWS is to provide access – to help companies access AI services like automated transcription and translation, machine-learning-powered search, and so on. You mentioned Kendra; that is my focus today: intelligent search, powered by machine learning.
What Is Artificial Intelligence, Anyway? A Prediction Process
One of the many different definitions of AI that I’ve come across, the least technical, goes something like this: “Artificial intelligence is what we would like a computer to do, but we have no idea how to make it do it; we just want it done in some magical fashion. Once computers can do that with whatever capabilities they have – well, we don’t know what it is, but it’s no longer artificial intelligence.” Sounds like you’re trying to get rid of artificial intelligence for us here.
I think it’s more down to earth than that. Artificial intelligence is essentially a prediction process, just like the one our own human brains carry out. We take a series of inputs and run them through a very complex model – in humans, a neural network, which we essentially try to replicate today with machine learning neural networks – and it provides a set of options; it’s essentially a model.
Using artificial intelligence today, in so many applications, is completely seamless. Many applications that many of us use today rely on a lot of automated decisioning behind the scenes. Given a series of inputs, they make a decision about the best outcome. A great example is search: when you enter a query, asking a question in a search bar, you get the best possible outcome – or hopefully the best possible outcome and answer – through a process of decisioning that incorporates natural language understanding, which is a machine learning model, and may incorporate document ranking models and other types of machine learning models, to aid the process of making those decisions.
Artificial Intelligence and Machine Learning in eDiscovery: Pre-Processing, Review, and the Underused Case Assessment
What have you seen in terms of the use of machine learning and artificial intelligence when it comes to eDiscovery?
In eDiscovery it’s really interesting, since law is close to my heart. I’m obviously a law school graduate; I never practiced, but I have many lawyer friends and I see their pain very closely. I think that eDiscovery has a lot of decision-making elements when you break it down into its components, and many of them can be replaced or augmented with artificial intelligence today. For example, pre-processing: when somebody dumps an enormous amount of content onto your server, it may come in the form of multilingual content, scanned documents, audio and video files, or images. All these things have associated AI services today that can transform them all into text that can be reviewed – very quickly and very efficiently. Not always perfectly, granted – humans make mistakes too, and AI makes mistakes – but overall saving a lot of time and money. That’s pre-processing, one step where I don’t see AI being used very broadly in eDiscovery. It should be used more; the technology is ripe for that.
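The pre-processing step Udi describes – routing each incoming file type to a service that turns it into reviewable text – can be sketched in a few lines. Here is a minimal illustration in Python; the handler functions and the strings they return are hypothetical placeholders, not any particular AWS API, and in a real pipeline each one would call an actual transcription or OCR service.

```python
from pathlib import Path

# Hypothetical stand-ins for the AI services Udi mentions. In a real
# pipeline these would call transcription and OCR services.
def transcribe_audio(path):
    # Audio/video -> text.
    return f"[transcript of {path.name}]"

def ocr_image(path):
    # Scanned documents and images -> text.
    return f"[OCR text of {path.name}]"

def read_text(path):
    # Already-textual content passes through.
    return f"[text of {path.name}]"

# Route each file type to the right converter so everything
# ends up as text that can be reviewed.
HANDLERS = {
    ".wav": transcribe_audio, ".mp4": transcribe_audio,
    ".png": ocr_image, ".tif": ocr_image,
    ".txt": read_text, ".eml": read_text,
}

def preprocess(paths):
    corpus = {}
    for p in map(Path, paths):
        handler = HANDLERS.get(p.suffix.lower())
        if handler:
            corpus[p.name] = handler(p)
    return corpus
```

The resulting text corpus is what feeds the review and case assessment steps discussed next.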
Then, in the review process, today we have machine learning models producing custom classifications. You can actually predict pretty well what might be classified as privileged and confidential or not, and what might be deemed responsive or not for an eDiscovery case. You can do a lot with that, and it is being utilized in eDiscovery, but not as broadly as I would have expected, because these technologies are actually doing really, really well. Between those two steps – between pre-processing and review – there is one thing that I personally think is missing, which is the case assessment process. In order to understand what’s in front of me, what’s going on, who did what, what happened, what transpired, who is linked to whom – there are a lot of really nice solutions out there, but they make very little use of AI.
Give me some examples, especially on that third part. I think you’re right, very few people have touched on that. So, give me some examples in that third part of specific types of technology that could help and how that might play out.
I’ll focus on Amazon Kendra, which is currently my baby. What we’ve done here is realize that all search technologies today are focused on what’s called keyword search, or lexical search. We’re searching for identifiers, keywords. When we know what we’re looking for – something very specific within documents – it might work pretty well, though accuracy is not very high.
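The lexical search Udi is contrasting with reduces to an inverted index plus term-overlap scoring. A minimal sketch, assuming whitespace tokenization and no stemming or relevance tuning:

```python
from collections import defaultdict

def build_index(docs):
    """Inverted index: word -> set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def keyword_search(query, index):
    """Rank documents by how many query terms each contains."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for doc_id in index.get(word, ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)
```

This is exactly the “we already know the keywords” model: it works when you know what to type, and returns nothing useful when you do not yet know what to ask – the gap intelligent search aims to fill.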
But, if you think about it… You’re exposed to a new case and you don’t know exactly what’s going on, not yet. The best way people start building up their knowledge of the case today is through a hit-or-miss, trial-and-error process. We’ve built an intelligent search service that uses multiple machine learning models under the hood, including reading comprehension and natural language understanding, so you can ask natural language queries and start understanding the case and seeing relationships among different activities, actions, people, and organizations within the text. For example, you can ask, “When was the contract signed between Company A and Company B?” and get an answer, an actual answer: “On October 25th, 1974”. “Who was the party to that contract?”, and you get names. Then, seeing a name, you can ask another question: “What is the role of this person?” and get an answer: “The Chief Financial Officer”.
You basically start communicating as if the case were another person that has a lot of information, and you start building those links. By connecting intelligent search with a graph database, we’re actually able to see those connections live and start a true discovery process very quickly – accelerating case assessment and case analysis, and being able to understand what’s going on and what we should be asking.
Then you can hone in with keyword search and say, okay, so now I need to find all these things – which documents contain these particular items. That capability is already there; it’s something that we can start utilizing in case assessment that has not been utilized until now.
Graph Databases: Giving a Visual Sense
You mentioned graph databases. That’s something that’s not in wide use yet, I think, in eDiscovery circles. What are you talking about there?
Databases in general have also evolved into a broad set of purpose-built, database-per-use-case options. We have key-value databases, we have relational databases, obviously, and we have time series databases. Graph databases basically represent relationships. Today, using natural language processing, you can actually extract and identify entities – for example quantities, people, organizations, locations, times, dates, and so on. These are specific entities of certain types; you can identify them in text and extract them. You then throw them into the graph database. I’m going to oversimplify this, but you basically say, okay, this document talks about things that happened at this time, mentioning these dollar quantities, these changes in company value, these people that were talking or involved, and so on and so forth. You basically create a map: this document talks about these things. It’s not the whole story; it’s the main points in the story, or the main entities in the story.
Some of these entities exist in many other documents. The graph database creates a relationship map that shows not only which documents relate to which others and how, but also how much. So, how much are two documents related, given a particular person? Are they very much focused on that person? Both of them, or an event? Or an entity that these people created, for example a commercial entity?
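One way to picture the relationship map Udi describes: treat each document as the set of entities extracted from it, and weight the edge between two documents by how many entities they share. A minimal sketch in plain Python – the document IDs and entities are invented for illustration, and a real system would use NLP-extracted entities stored in an actual graph database:

```python
from itertools import combinations
from collections import defaultdict

def build_entity_graph(doc_entities):
    """doc_entities: {doc_id: set of entities found in that document}.

    Edge weight between two documents = number of shared entities,
    a rough measure of 'how much' they are related.
    """
    edges = defaultdict(int)
    for a, b in combinations(sorted(doc_entities), 2):
        shared = doc_entities[a] & doc_entities[b]
        if shared:
            edges[(a, b)] = len(shared)
    return edges

def related_through(entity, doc_entities):
    """All documents that mention a given entity."""
    return {d for d, ents in doc_entities.items() if entity in ents}
```

Given entities like people, organizations, dates, and dollar amounts, the edge weights answer exactly the question in the text: how much are two documents related, given a particular person or event?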
It gives you a good visual sense, and as visual thinkers, many of us can use that to move between these two worlds. The world of intelligent search: asking questions, getting answers, and saying, well, this answer is interesting, what else can you tell me about it? Well, your answer also relates to things that happened around those dates, by these people. Say, well, that’s interesting. What else? What other answers to my question do you have that relate to this particular person? And it immediately shows up. So, you move between these two worlds: intelligent search – machine learning, AI-powered search – and the world of graphing, of… it’s almost like visual faceting.
Learning More Upfront: A Discussion with Your Documents
This brings to mind, when I was in law school, we had of course trial advocacy classes, and I was active in the legal aid clinic as well. In both those programs we got instruction on a number of the practical aspects of being a lawyer and working on litigation. Part of it was, how do you take a deposition effectively? How do you question a witness at a deposition, a hearing, a trial, wherever it might be? One of the models was a T model. You put the main topics that you think you want to cover along the top, and then under those main topics, you have subtopics you might want to dig into more deeply. One of the challenges, of course, as you’re actually interrogating a witness at a deposition, for example, is that the answers they give you might not line up with what you were looking for. It sounds as if what you are talking about here in many ways takes that same T model, combines it with the series of questions that lawyers historically need to ask – the who, what, when, where, why, and how questions – and gives them a set of tools that let them do something with the data that’s much more akin to what historically we have been trained to do with witnesses. So that we can try to find, ferret out, and dig out the story through posing a series of questions, evaluating the responses, and following up with further questions. It sounds like that’s where this can be taking us.
Yeah, that’s a great example, because what you describe, this T model, would be equated in the computer world to a rule-based system, where you go, okay, I’m going to ask this, and if this comes back true, then I’m going to do this, and if it comes back false, or I get an answer of a particular type, I’m going to ask the following question. The problem is that we are constantly confronted with new information, and sometimes surprising information – things we didn’t know – and that leads us to places that we did not plan for within our rules.
Moving to a world that is more flexible: machine learning essentially creates a model of so many possibilities, sometimes millions and millions of possibilities, that we as humans cannot really create ourselves. We cannot create our own brain; it kind of evolved on its own. Even though we collect a lot of data and we feed the machine learning training process, the model building, we are not responsible for the outcome. The outcome is a very complex neural network that can actually help lead us no matter where the case goes.
The outcome is a very complex neural network that can actually help lead us no matter where the case goes.
This is exactly my point before, when you’re learning something, like learning a case and you want to be more responsive, no pun intended, to what’s happening in the case as it evolves and the people you’re talking to and the information that you want to share with other parties, it is very helpful that you have an open ended option and you can ask any question and get to places where you may have not expected to get to and be able to still ask more questions that take you even farther.
One of the cases I worked on along the way was, in part, about the contract nobody could find – and no one ever found it. We had a four-month-long trial and still no one ever found the contract. But it was also about the efficacy of a component part in a product. For that, we had gathered up manufacturing data, customer complaint data, component part information, lots of different data sources, threw them all into one large hopper, and, through a very challenging and, by today’s standards, primitive process, tried to identify the data that fit with the other side’s story and then tried to identify the outliers, the pieces that did not match at all. It sounds as if what you’re talking about is a much more sophisticated way of accomplishing those types of objectives: find the story in the data, figure out whether the story you think is your story actually matches what’s out there, and see what better story can be suggested to you from the data. Am I tracking correctly here?
Yeah, that’s an interesting point of view. One thing I want to point out is that while you end up with many more options and more flexibility in learning what’s going on, the interaction, or the process, is actually not more complex; it’s actually simplified. You can think of it as sitting with a bunch of documents and reviewing them one by one, slowly accumulating knowledge. You go back and reevaluate what you’ve seen, then move forward and learn more, and then go back. Instead of that process of going back and forth, you can learn a lot more right up front, as if that pile of documents, that electronic pile, were a person or group of people who were actually involved in the case. And you get a chance to interview them, interrogate them, ask them questions. Enabling a computer to understand language – to understand what’s in the text, maybe not the deep meanings behind what people communicated, but certainly what’s plainly in the text – and to ask questions, leading questions that will take you to different places, actually can accelerate that process. It’s a very complex technology, I want to make that very clear, but it makes the process uber-simplified, because it’s like talking to someone and asking them, hey, what happened here?
The Potential is Phenomenal
Enormous potential, obviously. How close is anyone to actually delivering that for use in eDiscovery, in actual investigations and lawsuits today?
Well, that’s the interesting thing. These technologies – some of them are newer than others – but I have not seen many companies at all, or corporate legal teams or eDiscovery consultants, adopting them as a broad strategy yet. I’ve seen a few companies showing interest, but this is really a kind of blue ocean opportunity here to transform – not change, but transform – the fundamentals of the eDiscovery process by doing a lot more up front. As with any process, when you are able to accomplish more up front, you come out ahead. Take software development as an example: if you do the work better upfront, you identify potential problems upfront, and you spend much less time and money in the expensive process of debugging later, or even worse, solving customer problems later. The earlier you discover things and understand what’s going on, the faster you can actually go to market and be successful. Take that to eDiscovery: the easier, faster, and certainly more cost-efficient you can be in getting to share information and make decisions about the case, the better. One of the key things that I always think about is that you’re making big decisions up front based on very little information: go to trial or settle. Making big decisions like this, you really want to know what you’re looking at. It takes either a great deal of effort up front, or you are working with very limited information in making those decisions. The opportunity here is huge: to be able to make decisions based on a lot more information up front. I don’t see a lot of companies out there – maybe after this talk we’ll have a surge – but I don’t yet see a lot of companies bringing in a lot of AI, and I think in eDiscovery, the potential is just phenomenal.
Where to Start
Udi, you have given us a lot of food for thought here. Before we bring this to a conclusion, any final thoughts for the folks with us today?
Well, one of the things that happens to me a lot is I have a conversation with executives in different industries about AI, and I see the emotional rollercoaster that they go through during that conversation. First, they are a little bit intimidated; then they hear all the potential and get really excited; and then all of a sudden they pull back and say, but where do I start?
With my final remarks I want to focus on where to start. AI is not an amorphous goal out there, it is something that today is extremely attainable.
AI is not an amorphous goal out there, it is something that today is extremely attainable.
Thank you very much, Udi. Udi Hershkovich is a go-to-market manager at Amazon Web Services, where, as you can tell from our discussion, he focuses on artificial intelligence and machine learning.
I am George Socha. This has been eDiscovery Leaders Live, hosted by ACEDS and sponsored by Reveal. Thanks, all, for joining us today. Please join us again next week when we talk with Ron Best, eDiscovery counsel and director of litigation systems at Munger, Tolles & Olson.
Thank you all very much and thanks, Udi!