[00:00:00] Speaker 03: ...versus Erie Indemnity Company. [00:00:22] Speaker 03: Do you want some water? [00:00:23] Speaker 03: I've got something for Evan. [00:00:25] Speaker 03: Do you want water? [00:00:26] Speaker 03: He's got water. [00:00:26] Speaker 03: I'll just knock it off. [00:00:27] Speaker 03: Coffee? [00:00:27] Speaker 03: Tea? [00:00:32] Speaker 05: They're making way for the next crowd to come in. [00:00:52] Speaker 07: Well, we'll see if you figured out this microphone, Mr. Hurt. [00:00:57] Speaker 02: Let's try it. [00:01:03] Speaker 07: Maybe you should bring it down. [00:01:04] Speaker 07: I think if you put it in front of you it works better even though it's a little lower. [00:01:08] Speaker 07: Maybe not. [00:01:11] Speaker 07: That's okay. [00:01:13] Speaker 07: Try it. [00:01:13] Speaker 07: Good morning. [00:01:15] Speaker 02: Good morning. [00:01:17] Speaker 02: May it please the court, my name is Christian Hurt, and I'm here this morning, although it's close to the afternoon, on behalf of Intellectual Ventures. [00:01:27] Speaker 02: This is an Alice appeal, and the district court erred when it concluded that the patent in this case, the '290 patent, [00:01:35] Speaker 02: failed the Alice test. [00:01:36] Speaker 02: So there are a number of issues in this appeal. [00:01:39] Speaker 02: The primary question is really where does this patent fit? [00:01:43] Speaker 02: Is this case more like Symantec or is it more like McRO? [00:01:47] Speaker 02: And we believe that this case is much closer to McRO and warrants the same outcome as McRO, a reversal of the district court. [00:01:55] Speaker 07: What about, can you include FairWarning in your analysis of what's close and what's farther? [00:02:00] Speaker 07: Because it seems to me it's McRO versus FairWarning.
[00:02:04] Speaker 02: Sure. So FairWarning, the difference between McRO and FairWarning, and where our patent fits in, is that the rules at issue in FairWarning were on fraud detection. [00:02:15] Speaker 02: It was a computerization of criteria that humans used. [00:02:19] Speaker 02: So how long did someone look at a particular medical file for? [00:02:23] Speaker 02: Who was the person? [00:02:25] Speaker 02: When was it accessed? [00:02:27] Speaker 02: In our case, like McRO, the criteria are tied to the technology. [00:02:34] Speaker 02: So in McRO, it was a specific rule that was associated with an output. [00:02:39] Speaker 07: Well wait, tell us about the criteria here and why they couldn't be done and weren't done by humans before. [00:02:45] Speaker 02: Sure, so there's two pieces to the claim. [00:02:47] Speaker 02: There's the selection part, and there's three selection criteria. [00:02:51] Speaker 02: And then there's what I call the digital signature process in the back half of the claim. [00:02:56] Speaker 02: The three selection criteria, selecting a file based on whether there is data beyond the end-of-file marker, or whether there's a mismatch between the file name and the contents of the file, or whether the aggregate size of identically sized files exceeds a threshold, there's no evidence in the record that humans actually performed... But those are kind of objective inquiries, and that, let me suggest it to you so you can tell me why I'm wrong, but that to me [00:03:24] Speaker 07: suggests that those objective inquiries make it more like FairWarning than McRO, where what was in place previously was really a subjective analysis.
[00:03:36] Speaker 02: And McRO said, yes, but now we've computerized it and made it objective, and this is something new and different. I think that analysis applies here, because in the patent's background there is a discussion that before these claims there was a manual review, and the administrator could go file by file and try to find out, is this file an unauthorized file or not? [00:04:01] Speaker 02: And that review missed the things that this patent catches. [00:04:05] Speaker 02: So a great example, a prime example, is data that's beyond the end-of-file marker. [00:04:12] Speaker 02: If you have a file that says document1.pdf, and you do a file-by-file review, and you open that file, it looks like a normal PDF, because a computer administrator can't see [00:04:26] Speaker 02: data that's past the end-of-file marker; it's not going to show up in the PDF. [00:04:32] Speaker 02: Or if it's an image file and the administrator opens the image, they'll see an image, but past the end-of-data marker there could be unauthorized file data associated with something besides the image, some illegal pirated software or something of the like. [00:04:50] Speaker 02: That doesn't show up in that manual review. [00:04:52] Speaker 07: Okay, well the computer allows you to do things quicker and better and more thoroughly, but haven't our cases said that's not enough? [00:05:04] Speaker 02: The cases have said that when what you are taking is a prior art process, a manual process, and just computerizing it. [00:05:14] Speaker 02: There isn't an analog to, for example, looking for data past the end-of-file marker. [00:05:20] Speaker 02: There's no evidence in this record that computer administrators could see that.
[00:05:25] Speaker 06: Isn't that the same as saying [00:05:28] Speaker 06: Suppose I assign a clerk a project and I say I don't want anything beyond three pages, or I give all my clerks an assignment and I say I don't want anything beyond three pages, I don't want any pictures, and I don't want anything addressing policy. [00:05:48] Speaker 06: And then they hand them to me. [00:05:49] Speaker 06: One is four pages. [00:05:51] Speaker 06: I just put that over there. [00:05:53] Speaker 06: One of them has pictures in it. [00:05:54] Speaker 06: I put that over there. [00:05:56] Speaker 06: I look at one that says, Congress said, I put that over there. [00:05:59] Speaker 06: So now I've got it. [00:06:01] Speaker 06: I've sorted them out. [00:06:03] Speaker 06: That's what your patent does. [00:06:05] Speaker 06: And then I make a report. [00:06:07] Speaker 06: I've got one on policy. [00:06:09] Speaker 06: Then I report it to Judge Prost. [00:06:12] Speaker 06: That's basically your invention there. [00:06:16] Speaker 02: I don't believe so. [00:06:17] Speaker 02: The reason is that for the data past the end-of-file marker, there's no way a human can see that. [00:06:27] Speaker 02: There's also the second part of the claim that's not just... Of course there is. [00:06:33] Speaker 05: They can look when they simply go past the code. [00:06:38] Speaker 05: They can do it. [00:06:39] Speaker 05: The machine can do it. [00:06:40] Speaker 05: They can do it. [00:06:41] Speaker 02: But you're not claiming the process of opening or even of seeing [00:06:59] Speaker 06: You're claiming something different. [00:07:01] Speaker 02: That's correct. [00:07:02] Speaker 02: And I was discussing whether a human could perform it. [00:07:06] Speaker 02: There's also a second part of the claim, the digital signature process. [00:07:11] Speaker 02: It's discussed in column seven. [00:07:15] Speaker 02: There's a new use of checksums in this patent.
[00:07:20] Speaker 07: So in the end, the file is... Why don't you tell us where we are in column seven? [00:07:23] Speaker 02: Sure. [00:07:24] Speaker 02: It's in column seven. [00:07:27] Speaker 02: Lines 48 through 65, the beginning discussion of figure two, state: unlike checksums as they are traditionally used in the computing field, [00:07:42] Speaker 02: the checksum taught herein is not related to the total number of bytes used to generate the number, thus reducing the traditional problem of checksums, namely, [00:07:53] Speaker 02: that similar file lengths are more likely to generate the same checksum than dissimilar file lengths. [00:07:59] Speaker 02: So let me explain how this impacts the claims. [00:08:03] Speaker 02: In the end-of-file example, where I've got an image that goes to a certain point, I've got some extra data beyond that point. [00:08:10] Speaker 02: If a checksum is generated on that entire file, it may not match a checksum in the unauthorized file list, because the unauthorized file data is just that last part. [00:08:24] Speaker 02: So if you take the checksum of the whole file, it's not going to match the checksum in the database. [00:08:31] Speaker 02: But this new, non-traditional use of checksums works on a rolling basis throughout the file to make sure that it hits that unauthorized data and produces a match, whereas a traditional use of the checksum across the entire file may not have a match. [00:08:49] Speaker 02: Because the unauthorized data, in my case, might be appended to an image, but in another case where it was found, it may have been appended to a different type of file like a PDF. [00:09:02] Speaker 02: And so if you include in the checksum [00:09:05] Speaker 02: the part of the file that is authorized, it won't perform the match.
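The rolling-checksum idea counsel describes, matching appended unauthorized data that a whole-file checksum would miss, can be sketched as follows. This is purely illustrative: the fixed window size, the use of CRC-32, and the function names are assumptions, not the '290 patent's actual algorithm.

```python
import zlib

def find_unauthorized(data: bytes, bad_checksums: set, window: int = 16) -> bool:
    """Slide a fixed-size window through the file, checksumming at each
    offset, so unauthorized bytes appended after legitimate content
    (e.g. past an image's end-of-data marker) can still produce a match."""
    for start in range(0, max(len(data) - window + 1, 1)):
        chunk = data[start:start + window]
        if zlib.crc32(chunk) in bad_checksums:
            return True
    return False
```

A checksum of the entire file would differ from the checksum of just the appended segment, so only the windowed scan finds the match.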
[00:09:10] Speaker 02: And I think that digital signature process distinguishes this from your honor's example, in addition to the selection criteria, because in your honor's example, you were selecting memos that were non-compliant based on criteria you could see, not based on some attributes of the files as matched up [00:09:32] Speaker 02: Why aren't humans capable of reading binary language? [00:09:59] Speaker 02: I think humans can see the zeros and the ones if your computer is able to open them. [00:10:05] Speaker 02: I believe the issue is knowing where the end-of-data marker is. [00:10:09] Speaker 02: It's not some bright line that's going to show up if you were to try to open it with a text pad. [00:10:15] Speaker 02: And there's no analog, at least directly, [00:10:20] Speaker 02: to humans being able to perform that process. [00:10:24] Speaker 02: I mean, taking it to its extreme, humans could perform data compression and encryption, but those types of patents are still eligible. [00:10:34] Speaker 06: But you're not claiming any of those things. [00:10:36] Speaker 06: You have a system, a computer system, that selects from a pile of files, it selects those and then it [00:10:47] Speaker 06: It segregates them by aggregate size, content, and data. [00:10:54] Speaker 06: Then you have a generator that generates identification values, then a report. [00:11:02] Speaker 06: That's what you've invented, the system of doing that. [00:11:06] Speaker 06: And that system [00:11:09] Speaker 06: Because we don't know how the file is selected, we don't know how you go about it, we don't know the rules for comparing a generated identification value from one to another. [00:11:22] Speaker 06: All we're left with is selecting files, looking at them based on content. [00:11:31] Speaker 06: How is that not abstract? [00:11:32] Speaker 02: Sure, so again, McRO. [00:11:35] Speaker 06: I mean, that's what I'm getting to.
[00:11:37] Speaker 06: There's no rules here. [00:11:40] Speaker 02: So in McRO, the court concluded the rule of McRO was an output as a function of certain inputs. [00:11:48] Speaker 02: The output was going to be what is the character's lip-synchronization animation as a function of [00:11:54] Speaker 02: the sequence of sounds, and when those sounds happen. [00:11:58] Speaker 02: That was the detail of the rule in McRO. [00:12:01] Speaker 02: Here, the rule is it provides the three selection criteria, so it's not just selecting a file, and then the how, if there is an issue of how... But McRO had rules that were based on mathematical values and generation of the facial expression, [00:12:19] Speaker 06: And matching that up to the configuration of the lips, the mouth. [00:12:26] Speaker 06: It did have rules. [00:12:28] Speaker 02: It did. [00:12:29] Speaker 02: Those were all in the specification of that patent. [00:12:33] Speaker 06: But it claimed them. [00:12:34] Speaker 06: The claims claimed rules. [00:12:37] Speaker 06: What are the rules that are claimed here? [00:12:40] Speaker 02: Sure. [00:12:40] Speaker 02: So there are the rules of the selection criteria, and then the comparing, the generating and comparing steps. [00:12:49] Speaker 02: Those are both in the claim. [00:12:51] Speaker 06: It doesn't say criteria. [00:12:53] Speaker 06: It just says selecting a file. [00:12:56] Speaker 06: Selecting a file by determining. [00:13:00] Speaker 06: I don't see criteria, I don't see specific rules. [00:13:04] Speaker 02: The rules in the selection step are: does the file name mismatch the contents of the file? [00:13:13] Speaker 02: Is there data beyond the end-of-data marker? [00:13:17] Speaker 02: Or does the aggregate size of a plurality of identically sized files exceed the threshold?
[00:13:20] Speaker 02: And if the claim lacks enough [00:13:22] Speaker 02: how, to one of ordinary skill in the art, the patent also includes 50 pages of source code for each of those limitations. [00:13:29] Speaker 02: And the court recently, in Visual Memory, concluded that at the Rule 12 stage, which is where we are here, the court must credit that that source code would show one of ordinary skill in the art how to implement each of those functionalities. [00:13:44] Speaker 02: And the rules of McRO say the first set of rules define an output morph weight set stream, for the images, as a function of the phoneme sequence, a sequence of sounds, and the timing of the sequences. [00:14:02] Speaker 02: Our case has [00:14:03] Speaker 02: an equally detailed, in my view, set of rules for how to select a file and determine it's unauthorized. [00:14:09] Speaker 02: And like McRO, for the prior manual process, you had subjective components where the administrator would either search for large files and try to determine if they're unauthorized or not, or manually review files. [00:14:24] Speaker 02: That process had limitations because it missed the types of files that this claim [00:14:29] Speaker 02: Yes, thank you. [00:14:37] Speaker 07: We'll reserve the remainder and hear from your colleague. [00:14:40] Speaker 07: Thank you. [00:14:51] Speaker 00: I want to try to talk into your mic. [00:14:58] Speaker 07: Thank you. [00:14:59] Speaker 00: I think, Judge Prost, you posed to Mr. Hurt the issue of is this case closer to McRO or is it closer to FairWarning. [00:15:13] Speaker 00: I would add to that, closer to FairWarning, or even the first appeal in this case, where we had the '434 patent, which dealt with database technology, specifically limited to XML, a particular computer language, not a language that people speak, [00:15:31] Speaker 00: specifically limited to the use of metadata, something that people can't understand.
[00:15:36] Speaker 00: And this court said, in that case, that doesn't matter, because the way you're using it is generic, it's abstract. [00:15:45] Speaker 00: The limitation to something that only computers can read or only computers can generate [00:15:50] Speaker 00: does not by itself move this case from one that deals with an abstract concept to one where we're talking about patent-eligible subject matter. [00:15:59] Speaker 07: So what more did McRO have that's missing here and missing in the earlier case? [00:16:04] Speaker 00: Your Honor, I think you already said it, and so I'll just be restating it. [00:16:08] Speaker 00: But I think the key distinction and one of the most important distinctions in McRO is that Judge Reyna's opinion in that case made it very clear that it was very important that in the prior art [00:16:20] Speaker 00: in McRO, the ways that computers were used to synchronize audio tracks with the visual 3D animation involved a subjective step, where the computer animator had to make a subjective judgment about where to draw particular lines in order to line up the facial expression with the audio track. [00:16:44] Speaker 00: And what the invention in McRO did, what the claims did, using specific rules, was take what was a subjective process and make it an objective process. [00:16:54] Speaker 00: And that was actually an improvement to the computers, an improvement to the way that computers could operate in order to do 3D animation. [00:17:04] Speaker 00: Here, we're not talking about taking something that was a subjective process previously and making it an objective process. [00:17:11] Speaker 00: Remember what the problem being solved here is. [00:17:15] Speaker 00: The problem being solved here is you are starting from a place where you already know what files there are that are unauthorized. [00:17:24] Speaker 00: That's necessary to the claim.
[00:17:26] Speaker 00: You're not finding new files and determining if they have pornography or copyrighted materials in them. [00:17:33] Speaker 00: You already have a list of all the files that you don't want on your system. [00:17:37] Speaker 00: What this claim is about [00:17:39] Speaker 00: is using certain predetermined criteria to run a screen and identify suspicious files that are then going to be looked at more closely in the second half of the claim, as Mr. Hurt described. [00:17:53] Speaker 00: So you're not moving from a subjective to an objective process. [00:17:58] Speaker 00: The process is already objective. [00:18:01] Speaker 06: Why isn't that an improvement in computer technology, to be able to identify matter that [00:18:09] Speaker 06: basically comprises malicious code? [00:18:14] Speaker 00: Two responses to that, Your Honor. [00:18:16] Speaker 00: Respectfully, I disagree with the premise that the claim has anything to do with the identification of malicious code. [00:18:24] Speaker 00: These claims are about identifying pornographic material, copyrighted material that shouldn't be on your system, other illicit files, [00:18:33] Speaker 00: but it's not about identifying, like, a computer virus, for example, which is what I would think of. [00:18:41] Speaker 06: The reason that it's not an improvement to the computer is that... Wouldn't a system that would prevent, say, my kid from getting on and viewing pornographic material on a computer, wouldn't that be an improvement over the use of the technology? [00:18:58] Speaker 06: Especially with respect to [00:18:59] Speaker 06: enabling my child to become proficient in that technology? [00:19:05] Speaker 00: Your Honor, I think you could hypothesize a claim that created a new technological method for, for example, recognizing that a completely new file that the computer's never seen before has pornography in it, or that it should be looked at further.
[00:19:24] Speaker 00: That's not what these claims are about. [00:19:26] Speaker 00: What these claims talk about is [00:19:27] Speaker 00: in column 1, line 64, to column 2, line 16 of the patent — line 13, I believe, or 33, I'm sorry. [00:19:40] Speaker 00: The background of the invention, the background of the technology, says we know that people who are trying to hide illicit content in digital files do a few different things. [00:19:51] Speaker 00: One is they'll take a large file and break it up into a number of equally sized smaller files to help escape detection. [00:20:02] Speaker 00: Another thing they'll do is they'll take the illicit content and stick it at the back end of legitimate content to try and hide it. [00:20:10] Speaker 00: And the third thing that they will do is name the file one thing, for example .txt, but put a different type of data in the file. [00:20:23] Speaker 00: And all of the criteria here, all the claims are doing, is saying, we know people are doing this. [00:20:28] Speaker 00: Just use those criteria and apply them to identify suspicious files. [00:20:34] Speaker 00: There's not even in the patent any argument that it's an improvement to the technology or that there's any technological insight in the invention. [00:20:44] Speaker 00: I would like to address, Judge Reyna, the second part of his question, which goes to what Mr. Hurt talked about, about perhaps the second half of his claim, this identification value that he's going to use as a basis for comparison to the list, the checksum. [00:21:05] Speaker 00: I have two points there. [00:21:06] Speaker 00: One is, I think there's a key acknowledgement at pages six to seven of Intellectual Ventures' reply brief. [00:21:13] Speaker 00: And that's where they're discussing what happened during the prosecution of the '290 claims.
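The three hiding techniques counsel lists map onto the three claimed selection criteria. A minimal sketch of such a screen follows; the magic-byte table, function names, and thresholds are illustrative assumptions, not anything taken from the patent.

```python
import os
from collections import Counter

# Illustrative file-type signatures only; a real screen would use a fuller table.
MAGIC = {".pdf": b"%PDF", ".png": b"\x89PNG", ".gif": b"GIF8"}

def name_content_mismatch(name: str, data: bytes) -> bool:
    # Criterion 1: extension claims one format, leading bytes say another.
    sig = MAGIC.get(os.path.splitext(name)[1].lower())
    return sig is not None and not data.startswith(sig)

def data_past_eof(data: bytes, eof_offset: int) -> bool:
    # Criterion 2: bytes exist beyond the format's declared end-of-data marker.
    return len(data) > eof_offset

def equal_size_over_threshold(sizes: list, threshold: int) -> bool:
    # Criterion 3: identically sized files whose aggregate size exceeds a
    # predetermined threshold (a large file split into equal chunks).
    return any(size * n > threshold for size, n in Counter(sizes).items() if n > 1)
```

A file flagged by any one criterion would then proceed to the identification-value (checksum) stage described elsewhere in the argument.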
[00:21:21] Speaker 00: And what happened during the prosecution is that there was a Section 102 rejection over a prior art patent. [00:21:29] Speaker 00: And the way that Intellectual Ventures overcame that rejection, and the way that it's described in the reply brief, was by adding those preselected, those predetermined selection criteria to the claim. [00:21:42] Speaker 00: So everything in the second half of the claim, in terms of generating a checksum, comparing it to a known list, and, if there's a match, marking the file, that is already acknowledged, in the reply brief, to be known in the prior art. [00:21:58] Speaker 00: The second point is, there was some discussion, and I think it was really a new argument that we haven't seen in the briefs to date, that there's something in particular about the checksumming [00:22:10] Speaker 00: that makes these claims patent eligible. [00:22:14] Speaker 00: The first point on that is, of course, the independent claims don't require checksums whatsoever. [00:22:20] Speaker 00: They just require the generation of an identification value. [00:22:24] Speaker 00: But even if they did require specifically generating a checksum, which is one way to generate an identification value, the patent says at column 9, lines 10 to 12, there are numerous possible algorithms that may be utilized to generate a checksum, [00:22:39] Speaker 00: with an exemplary algorithm shown in Figure 3. [00:22:42] Speaker 00: And all Figure 3 shows is a box that says generate checksum. [00:22:46] Speaker 00: And at lines 34 to 36 of column 9, it says it should be appreciated by a person having ordinary skill in the art that many other types of algorithms could be utilized to achieve results specific to certain types of files. [00:23:00] Speaker 00: So there's an acknowledgement in the patent itself that checksumming is well known.
[00:23:06] Speaker 00: Now, there is a statement in the specification that Mr. Hurt pointed to today for the first time that says we're going to discuss a way to do checksumming that was not previously done. [00:23:19] Speaker 00: I don't agree with Mr. Hurt's summary of how that works. [00:23:23] Speaker 00: What I think the patent's describing is that, rather than at the same time, [00:23:29] Speaker 00: you generate a checksum for a small part at the front of the file and see if you have a match before you generate a checksum for a larger part of the file. [00:23:39] Speaker 00: Because if the first 8 bytes or 12 bytes of the file don't match something on your list, then there's no reason to spend the time to checksum the entire file. [00:23:49] Speaker 00: That is arguably only claimed, excuse me, in claims 5 and 15. [00:23:56] Speaker 00: Intellectual Ventures has never argued, in the district court or in this court, that 5 and 15 should be treated differently from the independent claims. [00:24:05] Speaker 00: And even if it had, 5 and 15 don't describe anything other than conventional technology, just generating and comparing checksums. [00:24:16] Speaker 04: Mr. Hurt mentioned [00:24:18] Speaker 04: hundreds of pages of code. [00:24:22] Speaker 04: I didn't recall there being hundreds of pages of code. [00:24:27] Speaker 03: Fifty for each one, he said, so I multiplied that. [00:24:34] Speaker 00: I multiplied that, 50. [00:24:41] Speaker 00: A couple of points on the source code. [00:24:44] Speaker 00: One is, of course, it cannot be that Alice would have come out differently if the patent at issue before the Supreme Court had some generic source code appended to it. [00:24:56] Speaker 00: That cannot be the difference here between a patent-ineligible patent and a patent-eligible one.
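Counsel's reading of the specification, checksum a short prefix first and only checksum the whole file on a prefix hit, can be sketched as below. The index layout (prefix checksum mapped to full-file checksum), the 12-byte prefix length, and the CRC-32 choice are assumptions for illustration, not the text of claims 5 and 15.

```python
import zlib

def prefix_then_full_match(data: bytes, known: dict, prefix_len: int = 12) -> bool:
    """Cheap reject first: checksum only a short prefix; compute the
    full-file checksum only when the prefix appears in the known index."""
    p = zlib.crc32(data[:prefix_len])
    if p not in known:           # prefix miss: skip checksumming the rest
        return False
    return zlib.crc32(data) == known[p]
```

The design point counsel is making is that this staging is an efficiency optimization over conventional checksumming, not a different kind of checksum.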
[00:25:00] Speaker 00: Second, Intellectual Ventures has argued this issue three times, twice in this court and once in the district court. [00:25:07] Speaker 00: It has never identified a single line of that code and pointed to it and said, you see, this is where we're doing something that was not known in the past. [00:25:16] Speaker 00: And then third, the district court was right to discount the plaintiff's reliance on the source code. [00:25:22] Speaker 00: So the focus is on those three points. [00:25:24] Speaker 00: I would like to make one point about the claims that I think is important because of the amount of time Mr. Hurt focused on the idea of a human not being able to see data that's past the end-of-data marker. [00:25:39] Speaker 00: And that is, these claims are written in the disjunctive. [00:25:43] Speaker 00: Right? [00:25:44] Speaker 00: If you're using any one of the three predetermined selection criteria, then you come within the claim. [00:25:53] Speaker 00: And I don't think there's any argument that, even with respect to the end-of-data marker, humans couldn't do that. [00:25:57] Speaker 00: There's clearly, as the district court pointed out at footnote four, a non-computerized analog to that. [00:26:06] Speaker 00: I'm sorry, the district court's opinion says there's a non-digital analog to it. [00:26:11] Speaker 00: But even if we assume that humans cannot look past the end-of-data marker, there's no question that, for example, humans could look at the file sizes of sequential files that have the exact same size and then make a determination about whether the aggregate size of those files exceeds a predetermined threshold. [00:26:34] Speaker 00: And that's one of the criteria. [00:26:36] Speaker 00: There's really no debate that people could do that. [00:26:40] Speaker 00: I think with the remaining time, Your Honors, I'd like to answer any questions that you have.
[00:26:48] Speaker 00: Otherwise, we would submit that, considering Symantec, considering FairWarning, and Your Honors' prior decision in the prior appeal in this case, this patent is directed to an abstract idea. [00:27:01] Speaker 07: Thank you. [00:27:32] Speaker 02: by moving the microphone. [00:27:39] Speaker 02: I can begin by addressing the point that counsel just made regarding McRO, where the distinction counsel made was that in McRO, the process used by computer animators was a subjective process. [00:27:54] Speaker 02: Here, the process used by prior IT administrators was objective. [00:28:00] Speaker 02: The specification [00:28:01] Speaker 02: provides two examples of what prior computer administrators did, essentially a search for large files and a file-by-file review, both of which required an amount of subjective determination to figure out, is this really unauthorized [00:28:21] Speaker 02: or not. And here, the rules that are claimed in the invention, in the selection steps and the digital signature process, are different from what the prior art used; since the computer age, computerization [00:28:40] Speaker 02: of those processes is still something different. [00:28:46] Speaker 02: Particularly with the identification value limitations claimed. [00:28:51] Speaker 02: Claim 10, these operations are in each claim. [00:28:53] Speaker 02: And that's what distinguishes this case from FairWarning. [00:28:55] Speaker 02: The criteria are not things that humans did here. [00:28:59] Speaker 01: There's no evidence in the record at this stage in the case that humans performed the invention. [00:29:06] Speaker 01: The claimed steps are the point of the invention. [00:29:07] Speaker 01: Thank you. [00:29:15] Speaker 07: We thank both sides. [00:29:16] Speaker 07: The case is submitted. [00:29:18] Speaker 07: That concludes our proceedings for this morning. [00:29:20] Speaker 07: Thank you.
[00:29:47] Speaker 01: The Honorable Court is adjourned until tomorrow morning at 10 a.m.