[00:00:05] Speaker 04: We have one case to hear oral argument on this afternoon. [00:00:11] Speaker 04: And that's PurePredictive, Inc. versus H2O.ai, Inc. [00:00:17] Speaker 04: And this is 2017-2544. [00:00:19] Speaker 04: Yes. [00:00:20] Speaker 04: And Mr. Clegg? [00:00:30] Speaker 04: Yes. [00:00:30] Speaker 04: Is that correct? [00:00:31] Speaker 02: That's correct, sir. [00:00:32] Speaker 04: Have you reserved time for rebuttal? [00:00:35] Speaker 02: Three minutes we've reserved. [00:00:37] Speaker 04: Three minutes? [00:00:38] Speaker 04: Yeah. [00:00:38] Speaker 04: Okay. [00:00:41] Speaker 04: We're ready when you are. [00:00:46] Speaker 03: May it please the court? [00:00:49] Speaker 05: Mr. Clegg, you argue that your predictive ensembles are not mathematical formulations; I'm quoting your blue brief. [00:00:56] Speaker 05: Why aren't they? [00:00:58] Speaker 03: Well, and that's a great question. [00:01:04] Speaker 03: So a mathematical formula is essentially like, let's say, a function where you have a plus bx equals y or something like that, right? [00:01:13] Speaker 03: So that'd be just sort of a pure mathematical formula. [00:01:16] Speaker 03: And so we're not claiming that you could [00:01:19] Speaker 03: just simply claim math, okay? [00:01:21] Speaker 03: But when you use math, or you use an application, like... You use an algorithm to generate another algorithm, which is what you're doing. [00:01:32] Speaker 03: Yeah, but let's say for example, so the application of math in and of itself is not necessarily unpatentable, right? [00:01:39] Speaker 03: But in this case, we're using something that generates learned functions. [00:01:45] Speaker 03: But I think part of the confusion here is in the definition of a learned function and what it means in the context of the specification. [00:01:53] Speaker 03: So if we were to go to, for example, [00:01:57] Speaker 03: And this was actually one of the things in our opposition brief that we presented to the district court, the definition of the learned function. [00:02:07] Speaker 03: And that would be, okay, let me pull that. [00:02:11] Speaker 03: So, a learned function, and this is, [00:02:22] Speaker 03: I want to give you the right citation. [00:02:25] Speaker 03: I think it's on page seven of the opposition brief, but as- I'm sorry. [00:02:30] Speaker 03: What's the number on the record? [00:02:32] Speaker 03: And that would be joint appendix 157, lines 12 through 15. [00:02:38] Speaker 03: It's also in the patent, joint appendix 27, column 8, lines 50 to 53. [00:02:54] Speaker 03: When you're ready, I can proceed. [00:02:56] Speaker 01: Do you think that that's definitional? [00:03:05] Speaker 03: Yeah, so it is in the sense that a learned function is computer-readable code. [00:03:12] Speaker 03: And then if you look at it in the context of the claim, [00:03:16] Speaker 03: what's going on here is that you're generating these learned functions, and these learned functions have a component of metadata to them, because you're arranging them in a way that you can direct data to the different [00:03:30] Speaker 03: learned functions. [00:03:32] Speaker 01: So you generate... If you generated a regression equation, would that be a learned function from data? [00:03:40] Speaker 03: You know, you do the usual... Right, so as it's used here in the claims... Right, that's a great question.
[00:03:45] Speaker 03: So as it's used here in the claims, the regression, like let's say a Bayesian classifier or something like that, or like you're saying, a regression formula. [00:03:57] Speaker 03: That by itself doesn't work within the context of the claims. [00:04:03] Speaker 03: So if you look at figure five of the patent, which is on page eight of the appellant's opening brief, there's one page, you can look at it, and that's [00:04:25] Speaker 03: And what you'll see, this is a predictive ensemble and the learned functions. [00:04:29] Speaker 03: So if you look here, you can see these various types of algorithms. [00:04:34] Speaker 03: And of course, an algorithm. [00:04:36] Speaker 03: Where are you, sir? [00:04:37] Speaker 03: I apologize. [00:04:38] Speaker 03: Page eight of the opening, the blue brief, page eight. [00:04:43] Speaker 01: This is figure five of the patent. [00:04:45] Speaker 01: Sorry, what's that? [00:04:46] Speaker 01: This is figure five of the patent. [00:04:48] Speaker 01: Is that what we're looking at? [00:04:49] Speaker 03: That's correct, yes. [00:04:53] Speaker 03: Now, if you look at each of these, alongside each of them, it'll say features A through F, or features N through S. And then down below, you're going to see where it explains that the different learned functions include different types of decision trees or things like this, but they're configured to receive a subset of data. [00:05:14] Speaker 03: So the learned functions are identified using metadata. [00:05:19] Speaker 03: So you'll have this computer-readable code that takes the [00:05:23] Speaker 03: metadata, and the metadata dictates which set, which type of data it's going to get. [00:05:28] Speaker 03: It may be a column of data, it could be some sort of a group of data, or a class of data. [00:05:35] Speaker 03: That's going to be dictated by metadata that gets directed to a particular, it could be an algorithm of some sort, and then that gets, and so the code, we've got this computer-readable code, [00:05:50] Speaker 03: that ties this metadata to an algorithm. [00:05:54] Speaker 05: Claim 14 is your representative claim, yes? [00:05:57] Speaker 03: Well, that's what the defendants argued down at the district court level. [00:06:06] Speaker 03: So is it or isn't it? [00:06:08] Speaker 03: Well, I'd say not necessarily. [00:06:09] Speaker 03: I think claim one's more representative. [00:06:11] Speaker 03: But I mean, the problem with just saying that it's representative is they made arguments that this is representative. [00:06:17] Speaker 03: And then they started to claim that it was missing all the stuff that was actually found in the other claims that were sort of computer related. [00:06:23] Speaker 03: I mean, if you look at claim 14. [00:06:26] Speaker 05: To me, it appears to represent the running of data through an algorithm to create a new one. [00:06:34] Speaker 03: Well, except for that, and that's a good point. [00:06:36] Speaker 03: So except for the data is not, so the generation of the, well, not of algorithms, because a learned function, as I mentioned, we don't look at that as an algorithm, just solely an algorithm, right? [00:06:52] Speaker 03: But if you are generating- Why isn't it an algorithm? [00:06:57] Speaker 03: Well, because it also, you're also including these metadata ties, right? [00:07:03] Speaker 03: It's operated by an algorithm, right? [00:07:05] Speaker 03: And it may have various levels of algorithms that are tied to it, okay?
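(For illustration of the routing counsel describes here — metadata dictating which columns of data each learned function receives, as in figure 5's "features A through F" — the following is a minimal Python sketch. Every name in it is hypothetical; this is not the patent's implementation.)

```python
# Hypothetical sketch only: metadata ties each learned function to the
# feature subset it receives (cf. figure 5's "features A through F").
from typing import Callable, Dict, List

# A "learned function" per the definition read above: computer-readable
# code that accepts an input and provides a result.
LearnedFunction = Callable[[Dict[str, float]], float]

def route(record: Dict[str, float],
          functions: List[LearnedFunction],
          metadata: List[List[str]]) -> List[float]:
    """Give each learned function only the columns its metadata names."""
    results = []
    for fn, features in zip(functions, metadata):
        subset = {k: record[k] for k in features}  # metadata-selected columns
        results.append(fn(subset))
    return results

# Usage: two toy learned functions, each tied by metadata to different columns.
f1 = lambda d: 2.0 * d["a"] + d["b"]        # a regression-like function
f2 = lambda d: 1.0 if d["c"] > 0 else 0.0   # a decision-stump-like function
print(route({"a": 1.0, "b": 2.0, "c": -1.0}, [f1, f2], [["a", "b"], ["c"]]))
```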
[00:07:10] Speaker 03: But then if you were to, when you're generating these learned... So you're creating an algorithm and adding other data? [00:07:20] Speaker 03: Metadata, yeah. [00:07:21] Speaker 03: But metadata is referential data, right? [00:07:23] Speaker 03: You're referencing data using data in order to know which predictive information you want to process, okay? [00:07:32] Speaker 01: Now, the other thing that is... Could you give one concrete example of that? [00:07:39] Speaker 03: Sorry. [00:07:39] Speaker 01: Metadata referencing other data, the latter to be... [00:07:43] Speaker 01: Yeah, so... Just give one concrete example. [00:07:46] Speaker 01: Abstraction is a problem in this case. [00:07:48] Speaker 03: Okay, so let's say there's a bunch of information. [00:07:52] Speaker 03: You have information regarding zip codes, information regarding marital status, and information regarding something like death rates, okay? [00:08:04] Speaker 03: And so different types of [00:08:10] Speaker 03: predictive modeling or artificial intelligence are going to be more adept for different types of information, for getting you results. [00:08:27] Speaker 03: So in this way, what we're claiming here is you pseudorandomly generate [00:08:36] Speaker 03: these learned functions. [00:08:38] Speaker 03: And so the great part about pseudorandomly doing it without prior knowledge is you start to remove a bias. [00:08:44] Speaker 03: Assumptions about the different types of artificial intelligence modeling, okay? [00:08:49] Speaker 03: So there's artificial intelligence modeling. [00:08:51] Speaker 03: There's a lot of some of these, like Bayesian classifiers. [00:08:54] Speaker 01: So pseudorandomly generate, use a bunch of zip code, marital status, and death rate data to begin to develop some kind of guess about correlations. [00:09:03] Speaker 03: No, so it's a little bit different here. [00:09:06] Speaker 03: I think there's a little bit of confusion at the district court. [00:09:08] Speaker 03: So the data that you're using, the generation is pseudorandom in that you're really just producing a bunch of these various types and then you are running it against test data. But in the prior art, [00:09:26] Speaker 03: you had, there are these known artificial intelligence models that they use for computer, you know, what they use on computers to generate predictive, you know, outcomes. [00:09:38] Speaker 03: So, but those carried assumptions. [00:09:41] Speaker 03: There were sort of these kinds. [00:09:43] Speaker 03: There was an assumption that this could do this kind of a result. [00:09:46] Speaker 03: And so that's where your experts would get in and get involved with having prior knowledge and say, why don't we try some of these? [00:09:52] Speaker 03: Why don't we try some of these? [00:09:53] Speaker 03: By removing [00:09:55] Speaker 03: that expert, you remove the assumptions and you get a result from what you're producing. [00:10:02] Speaker 03: So you're producing all these different, the generation of them isn't necessarily by filtering data through, you don't have something that says let's get these kinds of algorithms, let's get these algorithms because they have assumptions based on them because of the data. [00:10:19] Speaker 03: But once you generate a bunch of them, you can analyze it based on [00:10:24] Speaker 03: the test data, and then you don't necessarily get the fastest result out of these or the best result by each individual one.
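(A hedged sketch of the pseudorandom generation just described: candidates are produced from seeded randomness rather than chosen by an expert, then scored against held-out test data. The model families and scoring below are invented for illustration only.)

```python
import random

# Hypothetical families of learned functions; in the patent's examples these
# would be decision trees, Bayesian classifiers, regressions, and so on.
def make_linear(rng):
    a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    return lambda x: a * x + b

def make_threshold(rng):
    t = rng.uniform(-1, 1)
    return lambda x: 1.0 if x > t else 0.0

FAMILIES = [make_linear, make_threshold]

def generate_learned_functions(seed, n):
    """Pseudorandomly generate candidates -- no expert picks the families."""
    rng = random.Random(seed)  # seeded, hence pseudorandom and reproducible
    return [rng.choice(FAMILIES)(rng) for _ in range(n)]

def evaluate(fn, test_data):
    """Score one candidate against held-out test data (mean squared error)."""
    return sum((fn(x) - y) ** 2 for x, y in test_data) / len(test_data)

test = [(0.0, 0.1), (0.5, 0.6), (1.0, 0.9)]
candidates = generate_learned_functions(seed=42, n=20)
scores = [evaluate(fn, test) for fn in candidates]
print(min(scores), max(scores))  # the spread before any combining
```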
[00:10:35] Speaker 03: What happens is you start to combine them and you put them into an ensemble, and together you may end up getting a bunch of slow learned functions, but when you combine them together you get a better result, a better predictive result. [00:10:50] Speaker 05: Does that make sense? [00:10:51] Speaker 05: What about the citation to RecogniCorp v. Nintendo, which is a 2017 case? [00:10:57] Speaker 05: It's in the red brief, and yet you didn't mention it in your reply brief. [00:11:04] Speaker 05: Let me get my glasses. [00:11:05] Speaker 05: I apologize. [00:11:06] Speaker 05: OK. [00:11:09] Speaker 05: It holds that a process that starts with data, adds an algorithm, and ends with a new form of data is directed to an abstract idea. [00:11:18] Speaker 03: Well, because the claims aren't getting us to a new form of data. [00:11:24] Speaker 03: What we're doing is we're creating an arrangement of learned functions. [00:11:29] Speaker 03: Now those learned functions will come up with data. [00:11:32] Speaker 03: But that's something else, right? [00:11:35] Speaker 03: The claim is actually directed towards the arrangement of these and how we arrange them. [00:11:39] Speaker 05: When I asked you about claim 14, you said, well, it's not just an algorithm, there's metadata. [00:11:47] Speaker 05: That's data, yes? [00:11:49] Speaker 03: Yes, but that data itself is an arrangement. [00:11:54] Speaker 03: Let's go to Enfish, for example. [00:11:57] Speaker 03: So in Enfish you had a single-column data table that was self-referential. [00:12:01] Speaker 03: We all know this case. [00:12:03] Speaker 03: It's one of the more famous cases out there for us right now. [00:12:09] Speaker 03: There was a relationship defined by data, right, within a data table there, that made it patent eligible. [00:12:19] Speaker 03: In this case, there's a relationship. [00:12:22] Speaker 03: Where do you show that? [00:12:25] Speaker 03: Well, both in the specification and in the claims. [00:12:27] Speaker 03: Let me pull this out. [00:12:31] Speaker 03: So the, [00:12:39] Speaker 03: So we talk about in the last element, are you wanting me to go to claim 14 then? [00:12:45] Speaker 03: Sorry? [00:12:48] Speaker 03: You said claim 1. [00:12:49] Speaker 03: Yeah. [00:12:49] Speaker 03: On those ones, they're very similar. [00:12:51] Speaker 03: They're slightly... Let me see if those ones are... I know there's a little bit of difference between a couple of claims in the last element. [00:12:58] Speaker 03: Like claim 17, slightly different. [00:13:01] Speaker 03: But we can go to claim 14 or claim 1. [00:13:05] Speaker 03: I think those ones are fairly... Representative? [00:13:10] Speaker 03: Well, we're avoiding that word, but I think there are some distinctions between them a little bit. [00:13:15] Speaker 03: But for the purposes of this, let's go to claim one. [00:13:21] Speaker 03: But basically, at the last part of claim one, [00:13:26] Speaker 03: you see that multiple learned functions are selected and combined based on evaluation metadata. [00:13:31] Speaker 03: So that combination, that arrangement, is going to be based on this metadata. [00:13:37] Speaker 03: Where are you? [00:13:37] Speaker 03: Line what? [00:13:38] Speaker 03: Sorry. [00:13:39] Speaker 03: One, two, three lines into the paragraph. [00:13:43] Speaker 03: Actually, I probably should open the actual... I'll open the actual patent so we can be reading patent lines. [00:13:48] Speaker 03: That'll make it easier.
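(Continuing the sketch: counsel's point is that individually weak learned functions can be selected and combined, based on their evaluation results, into a stronger ensemble. The selection rule and averaging here are illustrative stand-ins, not the claimed method.)

```python
# "Evaluation metadata" is modeled here as just each candidate's recorded
# test score; selecting and combining "based on" it is an illustrative
# stand-in for the claim language, not the patent's actual mechanism.
def build_ensemble(candidates, scores, k=2):
    """Select the k best-scoring functions and combine them by averaging."""
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0])
    chosen = [fn for _, fn in ranked[:k]]  # selected based on the metadata
    def ensemble(x):
        # Individually mediocre functions can average into a better predictor.
        return sum(fn(x) for fn in chosen) / len(chosen)
    return ensemble

# Usage with toy candidates and their (hypothetical) evaluation scores.
cands = [lambda x: x, lambda x: x + 0.5, lambda x: 0.0, lambda x: 2 * x]
scores = [0.10, 0.30, 0.90, 0.20]
model = build_ensemble(cands, scores, k=2)
print(model(1.0))  # averages the two best-scoring functions -> 1.5
```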
[00:13:53] Speaker 03: So if we are [00:14:07] Speaker 03: This is going to be, I have it in Joint Appendix 35, and it's going to be... Yeah, Column 23. [00:14:15] Speaker 03: Column 23, and I'm coming down to... Okay, so Line 11, I guess it would be, the multiple learned functions selected and combined. [00:14:31] Speaker 03: How does it do that? [00:14:33] Speaker 03: How does it do that? [00:14:35] Speaker 03: Yeah. [00:14:36] Speaker 03: So it does that by after it combines and recombines them and finds. [00:14:44] Speaker 03: Where does it say that? [00:14:45] Speaker 03: That'd be in the specification. [00:14:47] Speaker 03: But the claim itself is referencing it. [00:14:51] Speaker 03: Well, it also actually says here based on the evaluation metadata. [00:14:54] Speaker 03: So. [00:14:55] Speaker 04: Mr. Clegg, you're well into your rebuttal time. [00:14:57] Speaker 04: Do you want to stop now? [00:14:59] Speaker 04: I'll restore three minutes of rebuttal time. [00:15:02] Speaker 03: You know, did you have any questions or would you like me to? [00:15:06] Speaker 04: Well, you can continue on and use up all your rebuttal time if you want, or you can stop here. [00:15:11] Speaker 03: Let's stop here so we have some time for rebuttal. [00:15:13] Speaker 03: Okay, let's do that. [00:15:14] Speaker 03: I'd appreciate that. [00:15:16] Speaker 03: Thank you. [00:15:26] Speaker 04: Counselor Bostick? [00:15:28] Speaker 04: Bostwick? [00:15:29] Speaker 00: Bostwick. [00:15:30] Speaker 00: Thank you, Your Honor. [00:15:31] Speaker 00: May it please the Court? [00:15:33] Speaker 00: I'd like to start by clarifying some of the terminology that's at issue. [00:15:37] Speaker 00: I think the introductory argument has- This may take most of your argument. [00:15:41] Speaker 00: It quite well could. [00:15:43] Speaker 00: But I think there are a few terms that are particularly important and that were discussed so far. [00:15:48] Speaker 00: And I want to start with a learned function, because I think I heard a misdescription of that. [00:15:54] Speaker 00: That is defined in the patent. [00:15:56] Speaker 00: This is at Appendix 27. [00:15:57] Speaker 00: It's at Column 8. [00:16:00] Speaker 00: And it's at the bottom. [00:16:02] Speaker 00: It starts at line 50, if that's helpful. [00:16:08] Speaker 01: This is the pair of sentences Mr. Clegg pointed to. [00:16:11] Speaker 00: So it's above that. [00:16:12] Speaker 00: This is a learned function. [00:16:15] Speaker 00: Oh, I'm sorry. [00:16:21] Speaker 00: It is those sentences, but I want to focus on a different aspect of it. [00:16:24] Speaker 00: Mr. Clegg focused on the fact that it says it's computer-readable code. [00:16:27] Speaker 00: But what's important here is that a learned function is something that accepts an input and provides a result. [00:16:34] Speaker 00: That is all that the learned function is. [00:16:36] Speaker 00: It can be anything. [00:16:37] Speaker 00: It can be, for example, Judge Taranto, as you suggested, a regression; that's clear from figure five. [00:16:43] Speaker 00: It indicates a regression as one of the learned functions. [00:16:46] Speaker 00: But it's well broader than that. [00:16:49] Speaker 00: It is anything that accepts data as an input and provides any kind of result.
[00:16:53] Speaker 00: And the patent here at column 8 goes on to explain that the result can be a classification, a confidence metric, an inferred function, a regression function, an answer, a prediction, a recognized pattern, a rule, a recommendation, or the like. [00:17:07] Speaker 00: It's extremely broad. [00:17:08] Speaker 00: But I also want to clarify that the metadata is not part of the learned function. [00:17:14] Speaker 00: If we look at the claims, and I think we may now have agreement on the fact that claim 14 is representative, but it's actually not meaningfully different from claim one. [00:17:28] Speaker 00: I think you have agreement on that. [00:17:30] Speaker 00: Yes. [00:17:31] Speaker 00: So if we look at claim one, and this is at appendix 35 at the top of column 23, it talks about the predictive ensemble. [00:17:40] Speaker 00: And the predictive ensemble is made up of two things. [00:17:43] Speaker 00: One is the subset of learned functions that you have selected in some unspecified way based on some unspecified form of evaluation. [00:17:53] Speaker 00: And then the rule set about how you're going to apply those, and the rule set is synthesized from the metadata, and the functions are selected based on the metadata, but they don't contain the metadata itself. [00:18:06] Speaker 00: So what you have here is exactly [00:18:09] Speaker 00: what the district court found and correctly determined to be the patent-ineligible abstract concept of testing and refining mathematical algorithms. [00:18:18] Speaker 00: That's a quote from Appendix 11 in the district court's opinion. [00:18:21] Speaker 00: It's nothing more than manipulating and organizing data and math. [00:18:25] Speaker 00: To be clear, we are not claiming that these claims are directed to a mathematical formula, [00:18:29] Speaker 00: to any particular mathematical formula or any set thereof. [00:18:32] Speaker 00: The claims are directed to simply this idea that certain functions will be better at predicting something about certain types of data, certain subsets of data, certain features within the data. [00:18:46] Speaker 00: And so you [00:18:49] Speaker 00: pseudorandomly generate a plurality of functions, which again can be anything, and then you evaluate them in some way to determine something about those functions and particularly which data they will pair with. [00:19:05] Speaker 00: And then you have this predictive ensemble. [00:19:09] Speaker 00: And the predictive ensemble is important. [00:19:11] Speaker 00: The predictive ensemble is defined by the claim language itself, this claim language that I just referenced. [00:19:15] Speaker 00: It is nothing more than the subset of functions you're going to use and the rule set, the information that tells you how you're going to use them. [00:19:23] Speaker 00: There is no structure. [00:19:25] Speaker 00: This is not Enfish. [00:19:26] Speaker 00: In Enfish, the claims as construed under Section 112(f) required [00:19:32] Speaker 00: a particular structure to the database. [00:19:34] Speaker 00: They required, as opposed to the prior art, which had multiple tables in the database and then definitions of the relationships between them, the claims in Enfish were directed to a self-referential database. [00:19:48] Speaker 00: And so here you only used one table and you had the rows that were used to define what the columns meant. [00:19:54] Speaker 00: That's a structure. [00:19:56] Speaker 04: So let's go back.
[00:19:57] Speaker 04: What would you say the predictive ensemble is? [00:20:01] Speaker 00: The predictive ensemble, again, it's defined in the claims, Your Honor. [00:20:04] Speaker 00: The predictive ensemble, it comprises the subset of... Where are you? [00:20:08] Speaker 00: I apologize. [00:20:09] Speaker 00: I'm at Appendix 35 at Column 23, and this is in Claim 1. [00:20:16] Speaker 00: And this is the final limitation, which talks about the predictive... [00:20:21] Speaker 04: Which line? [00:20:22] Speaker 04: Are you around line seven or so? [00:20:23] Speaker 00: I am at line seven, yes. [00:20:25] Speaker 00: This is the predictive compiler module. [00:20:27] Speaker 00: And the predictive compiler module is what forms the predictive ensemble. [00:20:31] Speaker 00: And then the claim goes on. [00:20:33] Speaker 00: The predictive ensemble comprising a subset of multiple learned functions from the plurality of learned functions, [00:20:40] Speaker 00: the multiple learned functions selected and combined based on the evaluation metadata for the plurality of learned functions, and the predictive ensemble comprising a rule set synthesized from the evaluation metadata to direct data through the multiple learned functions. [00:20:56] Speaker 00: So again, this shows how the predictive ensemble really is the abstract idea itself. [00:21:03] Speaker 00: It is this notion that you will have some functions. [00:21:06] Speaker 00: They may be combined in different ways or extended in different ways, but some set of functions and then information that comes from this metadata, which can be anything. [00:21:15] Speaker 00: It just means data about data. [00:21:16] Speaker 04: So I didn't read the predictive ensemble as being any type of structure. [00:21:21] Speaker 04: I couldn't find that it's a separate structure. [00:21:26] Speaker 04: Is that correct? [00:21:27] Speaker 00: That's correct. [00:21:28] Speaker 00: Nothing in this patent describes it in any structural way. [00:21:30] Speaker 04: It seemed to me it was a way of thinking. [00:21:32] Speaker 00: Absolutely, and this is certainly Claim 14, but in our opinion, most of the claims can be performed entirely in the human mind or with pen and paper, or at the very least, the equivalent of human mental activity, which this court has said is not patent eligible. [00:21:51] Speaker 00: So yes, there's no structure to the predictive ensemble. [00:21:54] Speaker 00: And I want to talk about a term that is used repeatedly in PurePredictive's brief, and that is metadata-structured environment, [00:22:03] Speaker 00: which does not appear in the patent, but is their characterization of what the predictive ensemble is. [00:22:08] Speaker 00: And if you unpack that term, I think it helps explain why this sounds like it might be like Enfish, but really is not at all. [00:22:17] Speaker 00: A metadata-structured environment is, as you suggest, Judge Reyna, not any specific structure. [00:22:23] Speaker 00: All it's telling you is that you have [00:22:25] Speaker 00: information that is organized by using metadata. [00:22:30] Speaker 00: It doesn't tell you how it's organized. [00:22:31] Speaker 00: It doesn't tell you what kind of metadata. [00:22:33] Speaker 00: It doesn't tell you how you actually make that happen. [00:22:36] Speaker 00: And again, metadata is a broad term that simply means information about information. [00:22:41] Speaker 00: I want to also talk about the predictive voting system.
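(A minimal sketch of the two-part predictive ensemble counsel reads from claim 1 — a selected subset of learned functions plus a rule set that directs data through them. The class name and rule predicates below are hypothetical, invented for illustration.)

```python
# Illustrative reading of the claim language just quoted: the ensemble is
# (1) a subset of learned functions selected via evaluation metadata and
# (2) a rule set synthesized from that metadata to direct data through them.
class PredictiveEnsemble:
    def __init__(self, functions, rules):
        self.functions = functions  # the selected subset of learned functions
        self.rules = rules          # rule set: (predicate, function index)

    def predict(self, record):
        for predicate, idx in self.rules:
            if predicate(record):            # the rule set directs the data
                return self.functions[idx](record)
        return self.functions[0](record)     # fallback if no rule matches

# Usage with hypothetical rules keyed on one input field.
fns = [lambda r: r["x"] * 2, lambda r: r["x"] + 10]
rules = [(lambda r: r["x"] < 0, 0), (lambda r: r["x"] >= 0, 1)]
ensemble = PredictiveEnsemble(fns, rules)
print(ensemble.predict({"x": 3}))  # rule routes to the second function -> 13
```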
[00:22:46] Speaker 00: I think there was also a little bit of confusion on this. [00:22:49] Speaker 00: PurePredictive argues to this court, though they did not argue below, that the [00:22:54] Speaker 00: claims are patent eligible at Alice Step 2 because they differ from the prior art Richter reference, which described using a predictive voting system. [00:23:07] Speaker 00: I first want to clarify, the predictive voting system is not what is described in the '446 patent as the prior art. [00:23:15] Speaker 00: If you look at appendix, I believe it's 27. [00:23:20] Speaker 00: Appendix 27, and this is column 7, and it's lines 2 through 16. [00:23:30] Speaker 00: This is the description of the prior art that used a data scientist, and it describes the data scientist, the human, who would, and I'm at line eight, data scientists typically must determine the optimal class of learning machines that would be the most applicable for a given data set and rigorously test the selected hypothesis by first fine-tuning the learning machine parameters [00:23:54] Speaker 00: and second, by evaluating results fed by trained data. [00:23:57] Speaker 00: That's not describing a predictive voting system. [00:24:00] Speaker 00: That is describing a human-implemented version of exactly what these patent claims cover, and it underscores the fact that these claims are simply automating human activity, which again, [00:24:11] Speaker 00: this court has repeatedly held does not make something patent eligible. [00:24:16] Speaker 00: The predictive voting is a different concept. [00:24:19] Speaker 00: Instead of running different subsets of data through different functions, you run all the data through all of the functions, and then each result gets a vote. [00:24:31] Speaker 00: And so, for example, [00:24:33] Speaker 00: if you have 10 functions and 6 out of 10 say the answer is X and 4 out of 10 say the answer is Y, then your answer is X. The fact that the prior art, or at least one prior art reference, used that form of predictive analytics and this patent describes a different form of predictive analytics, that's not an inventive concept. [00:24:56] Speaker 00: They're both abstract ideas. [00:24:58] Speaker 00: The Richter reference may have claimed predictive voting in a patent-eligible way. [00:25:04] Speaker 00: We don't have those claims before us, but the mere fact that this might be a novel way of doing predictive analytics, which we don't concede, but even if it is, novelty alone does not get you Section 101 eligibility. [00:25:21] Speaker 00: I also want to just make sure to clarify there's a reference to artificial intelligence. [00:25:26] Speaker 00: This is not a patent about artificial intelligence. [00:25:29] Speaker 00: This is not a patent about machine learning. [00:25:31] Speaker 00: This is a patent that is, at most, about predictive analytics. [00:25:34] Speaker 00: That term, in fact, appears only in the preambles to the claims, but we can agree that it's about predictive analytics. [00:25:40] Speaker 00: And the patent here, again, tells you what predictive analytics is. [00:25:44] Speaker 00: This is at appendix 26. [00:25:46] Speaker 00: Column 6, line 25. [00:25:50] Speaker 00: Predictive analytics is the study of past performance, or patterns found in historical and transactional data, to identify behavior and trends in future events. [00:26:02] Speaker 00: It's simply taking past behavior and using that to predict the future.
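(Counsel's predictive-voting contrast, as a short sketch: every function sees the same data and the majority answer wins, per the 6-out-of-10 example above. This is hypothetical illustration code, not the Richter reference's method.)

```python
from collections import Counter

def predictive_vote(functions, record):
    """Run the same record through every function; the majority answer wins."""
    votes = Counter(fn(record) for fn in functions)
    return votes.most_common(1)[0][0]

# Counsel's example: 6 of 10 functions answer "X", 4 answer "Y" -> "X".
ten = [lambda r: "X"] * 6 + [lambda r: "Y"] * 4
print(predictive_vote(ten, {"any": "data"}))  # -> X
```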
[00:26:07] Speaker 00: The patent goes on to say, this may be accomplished using a variety of statistical techniques, including modeling, machine learning, data mining, or the like. [00:26:16] Speaker 00: And then the following paragraphs discuss, among other things, that a category of learned functions may be classifications, which are a form of artificial intelligence. [00:26:27] Speaker 00: And so this patent may rely on machine learning or artificial intelligence to perform predictive analytics. [00:26:34] Speaker 00: That is not required by any of the claims. [00:26:36] Speaker 00: There's no allegation that it's required by any of the claims. [00:26:39] Speaker 00: But this is not a patent directed to artificial intelligence. [00:26:48] Speaker 00: If the court has no further questions, we ask that the district court be affirmed. [00:26:51] Speaker 04: Thank you. [00:26:55] Speaker 04: Counselor Clegg, we'll put you back to three minutes. [00:26:58] Speaker 03: Thank you, Your Honor. [00:27:07] Speaker 03: I'd just like to address quickly, if it pleases the court, a couple of issues that were raised, and then a couple that I didn't get a chance to address. [00:27:18] Speaker 03: One of the issues would be that H2O alleges that this is just automating human activity. [00:27:31] Speaker 03: And so I want to reference back to the specification and [00:27:39] Speaker 03: This is Appendix 26 and Appendix 27. [00:27:43] Speaker 03: Columns and lines. [00:27:46] Speaker 03: Column 6. [00:27:49] Speaker 03: Right. [00:27:49] Speaker 03: So on Column 6, there's a bunch of different types. [00:27:58] Speaker 03: Yeah, I'll start with line 66. [00:28:02] Speaker 03: But up above line 66, it just references a whole bunch of different types of modeling, predictive analytics modeling. [00:28:08] Speaker 03: And then right there, it says, each of these forms of modeling make assumptions about the data set modeled, the given data; however, some models are more accurate than others, and none of the models are ideal. [00:28:21] Speaker 03: And that's going on to column seven. [00:28:23] Speaker 03: And then coming down on column seven at about line [00:28:28] Speaker 03: eight, it says a data scientist typically must determine the optimal class of learning machines that would be most applicable for a given data set. [00:28:37] Speaker 03: So again, we're talking about these assumptions that come with a data scientist, and it kind of gets in the way. You get a more effective predictive ensemble, this predictive analytics tool, because you're removing the bias, there's no prior knowledge, and you're doing pseudorandom generation. [00:28:56] Speaker 03: With a data scientist, there's no pseudorandom generation, and there's no omitting the knowledge of an expert when you're doing it. [00:29:04] Speaker 03: So you're not really doing the same thing in the same way that an expert would do it. [00:29:09] Speaker 04: Would you say it's doing it faster? [00:29:11] Speaker 04: It's pretty much doing the same thing but faster. [00:29:13] Speaker 03: No, it does it faster, but the point's not that it's doing it faster. [00:29:18] Speaker 03: The point is that it's doing it differently. [00:29:19] Speaker 03: The point is you're removing the bias, the domain bias. [00:29:22] Speaker 03: You're removing the expert's presumptions about [00:29:25] Speaker 03: a group of models because you're pseudorandomly just generating different ones. [00:29:30] Speaker 04: So you're removing the data scientist.
[00:29:32] Speaker 03: You're removing the data scientist. [00:29:34] Speaker 03: And that removes these biases. [00:29:36] Speaker 03: How do you randomize it? [00:29:38] Speaker 03: The pseudorandom generation. [00:29:42] Speaker 03: Well, there's, you know, the question is you have a software program that generates, you know, pseudorandomly. [00:29:53] Speaker 05: Data scientists use a software program to generate random data? [00:29:59] Speaker 03: Well, the fact, well, [00:30:03] Speaker 03: prior to the date of the application, you wouldn't have necessarily seen that as routine and conventional. [00:30:08] Speaker 03: And I think that's what we're getting at is that... Well, my question was, can a data scientist do that? [00:30:14] Speaker 03: Could they... Well, using... Yeah, certainly anybody could do it using the invention, sure, because... No, no. [00:30:21] Speaker 05: Could a data scientist use a program to generate random data in order to randomize it? [00:30:31] Speaker 03: Well, we were talking about random generation of data versus learned functions, right? [00:30:35] Speaker 03: So what we're doing is that they could certainly randomly generate data. [00:30:40] Speaker 03: This is randomly generating learned functions. [00:30:42] Speaker 03: But that doesn't make it routine and conventional, particularly here where we're doing it in order to create ensembles. [00:30:49] Speaker 03: And that is one of the things that I wanted to address because, and I apologize, I'm out of time. [00:30:55] Speaker 03: May I finish this? [00:30:56] Speaker 04: Why don't you go ahead and conclude? [00:30:58] Speaker 04: I believe we have your arguments. [00:30:59] Speaker 03: OK. [00:31:01] Speaker 03: The main thing I wanted to make, the point that I wanted to make, was that there were several issues on this point that go to routine and conventional, that is, they're not routine and conventional. [00:31:12] Speaker 03: And outside of that, you're looking at a 103 issue anyway. [00:31:17] Speaker 03: And then I guess my time's up. [00:31:21] Speaker 03: I had one more point. [00:31:24] Speaker 03: It's about 30 seconds, if I may. [00:31:26] Speaker 03: They pointed out that Enfish had a particular structure, the self-referential database. [00:31:32] Speaker 03: But one of the things that was pointed out in Enfish is, and this is addressed at Joint Appendix 161, where we cite this in our opposition brief, [00:31:41] Speaker 03: Specifically, the Enfish court found that the specification teaches that the self-referential table functions differently than conventional database structures. [00:31:49] Speaker 03: And that while the structural requirements of current databases require a programmer to pre-define a structure and subsequent data entry must conform to that structure, the database of the present invention does not require a programmer to pre-configure a structure to which a user must adapt data entry. [00:32:08] Speaker 03: And so I think that is very applicable here. [00:32:12] Speaker 03: And we'd request that the court reverse. [00:32:15] Speaker 04: Thank you very much.