[00:00:02] Speaker ?: Case number 19-1063.
[00:00:05] Speaker ?: Program Suppliers, appellant, versus Copyright Royalty Board, et al.
[00:00:45] Speaker 01: Did I say that correctly?
[00:01:12] Speaker 03: Yes.
[00:01:13] Speaker 01: Thank you.
[00:01:31] Speaker 03: Good morning.
[00:01:32] Speaker 03: Good morning.
[00:01:32] Speaker 03: May it please the court?
[00:01:33] Speaker 03: My name is Gregory Olaniran.
[00:01:34] Speaker 03: I'm counsel for petitioner Program Suppliers.
[00:01:39] Speaker 03: Your Honor, we urge the court to rule in favor of Program Suppliers for a couple of reasons.
[00:01:45] Speaker 03: First, the Copyright Royalty Judges' final determination is not a product of reasoned decision-making.
[00:01:54] Speaker 03: Second, the Copyright Royalty Judges arbitrarily excluded
[00:01:58] Speaker 03: the corrected testimony of one of Program Suppliers' critical witnesses.
[00:02:08] Speaker 03: As an initial matter, the standard of review in this case is the APA standard of review.
[00:02:18] Speaker 03: The government contends that the operative standard in this matter is the exceptionally deferential standard.
[00:02:25] Speaker 03: That is not correct.
[00:02:27] Speaker 03: The exceptionally deferential standard is a long-discarded standard that was actually replaced many years ago by the APA.
[00:02:36] Speaker 03: So the APA is the operative standard in this case.
[00:02:39] Speaker 03: The APA is a much less deferential standard.
[00:02:42] Speaker 03: It requires, among other things, reasoned decision-making by the agency as well as conclusions that are supported by substantial evidence.
[00:02:52] Speaker 02: APA review is pretty deferential
[00:02:56] Speaker 02: with regard to determinations that are bound up in these complicated judgments about how best to model, which regression to use.
[00:03:08] Speaker 03: That's correct, Your Honor.
[00:03:10] Speaker 03: The APA standard is deferential.
[00:03:13] Speaker 03: Review of agency decisions in general is deferential, but there is a distinct difference between the APA standard and the exceptionally deferential standard.
[00:03:23] Speaker 03: And I think the NAB case, I think in 1998, goes to great lengths to explain the difference between these two standards.
[00:03:34] Speaker 03: The final agency determination lacks reasoned decision-making because the Copyright Royalty Judges adopted a share allocation methodology based on an inaccurate and incomplete record.
[00:03:50] Speaker 03: We now know this because very recently we had a revelation about documents that were supposed to be produced in discovery
[00:03:59] Speaker 03: during the case, but were not.
[00:04:02] Speaker 03: The long and short of the effect of this is that not only do we have a very serious discovery violation, we also have, during live testimony, misrepresentation of the extent of the witness's work by the witness when he was being questioned.
[00:04:18] Speaker 03: And that included not just the number of regression models that we now know the witness conducted, but also the extent of their regression work and the type of regressions that the witness conducted, as well as the timing of when the witness conducted these.
[00:04:41] Speaker 00: Are you speaking now about the evidence that arose in the satellite proceeding?
[00:04:46] Speaker 03: Yes, sir.
[00:04:47] Speaker 00: that Program Suppliers settled.
[00:04:51] Speaker 00: Isn't that correct?
[00:04:52] Speaker 03: That's correct.
[00:04:53] Speaker 03: But that does not detract from the fact that we now found that we have evidence that was supposed to be produced in a prior proceeding that we did not know of.
[00:05:04] Speaker 03: The critical point to this is that the judges relied... Could you have obtained that information through discovery?
[00:05:13] Speaker 03: I'm sorry, Your Honor?
[00:05:13] Speaker 01: Could you have obtained that same information through discovery?
[00:05:17] Speaker 03: Yes, we would have.
[00:05:19] Speaker 03: We actually did propound discovery on Commercial Television Claimants, who were the proponents of the witness that performed the regression.
[00:05:33] Speaker 03: And during that discovery, they told us that they had produced all responsive documents, which is... Did you ask for all regression analyses performed?
[00:05:44] Speaker 03: Well, we usually ask for all underlying documents.
[00:05:48] Speaker 01: I don't know what you usually do.
[00:05:49] Speaker 01: I wonder what you asked for in this case.
[00:05:50] Speaker 03: We asked for documents responsive to the witness's work.
[00:05:56] Speaker 01: And one of the judges... I guess this is a specificity thing.
[00:06:00] Speaker 01: Is your argument here that you asked for this specific information and somehow someone had an improper, unlawful,
[00:06:11] Speaker 01: insufficient response to discovery?
[00:06:14] Speaker 01: Or you asked, but maybe you didn't have the targeted question that you should have, and now you've discovered, had we only known, we would have asked for the 600 regressions and draft regression analyses.
[00:06:28] Speaker 03: I understand what you're saying.
[00:06:30] Speaker 03: We generally, we asked for documents underlying the witness's regression analysis.
[00:06:39] Speaker 03: There's a regulation within the judges' regulations.
[00:06:44] Speaker 01: And the other party objected as overbroad, right?
[00:06:49] Speaker 03: There was, I don't recall precisely, but I know that there was a, there almost always is a general objection at the beginning of the response.
[00:06:59] Speaker 01: Right, and then what did you do after that objection?
[00:07:01] Speaker 03: I'm sorry, Your Honor?
[00:07:02] Speaker 01: What did you do after that objection?
[00:07:03] Speaker 03: We did not do anything.
[00:07:06] Speaker 01: To me at least this is relevant.
[00:07:12] Speaker 01: So did you do anything to enforce your discovery request?
[00:07:16] Speaker 03: Not when they responded to us that there were no additional responsive documents.
[00:07:21] Speaker 01: I thought the response was that your request was overbroad, so then did you do a narrower one?
[00:07:26] Speaker 03: It was a general objection that was at the beginning of the response to the question itself.
[00:07:33] Speaker 01: Then what did you do to get this information?
[00:07:36] Speaker 03: We didn't do anything because we didn't know the information existed in the first place.
[00:07:40] Speaker 01: Well, that's what discovery is for.
[00:07:42] Speaker 03: I'm sorry?
[00:07:43] Speaker 01: That's what discovery is for.
[00:07:44] Speaker 01: You didn't do anything when they objected, you just let it go.
[00:07:47] Speaker 03: But when the party responds, objecting, you can object when you have the document
[00:07:53] Speaker 03: and you think there are grounds for not producing this.
[00:07:56] Speaker 03: But an objection is not valid
[00:07:59] Speaker 03: if the witness has, if the responding party has the document and is obligated to produce it, and fails to do so, and responds that
[00:08:11] Speaker 03: it has produced all responsive documents.
[00:08:15] Speaker 03: The critical point about this is the way the judges' discovery regulations work: you have to produce not just documents underlying
[00:08:23] Speaker 03: your testimony that you presented to the Royalty Judges, you also have to produce alternative analyses considered.
[00:08:34] Speaker 03: This was the trick.
[00:08:36] Speaker 03: They did not produce those documents.
[00:08:39] Speaker 03: They turned out to be critical because the regression analysis was what the judges relied on for the purposes of determining the shares.
[00:08:48] Speaker 03: And here are the consequences of this.
[00:08:52] Speaker 03: Not only was there a serious discovery violation, the witness on the stand was questioned in any number of ways about whether or not he performed additional regressions.
[00:09:03] Speaker 03: He said no.
[00:09:04] Speaker 03: In some cases he couldn't recall.
[00:09:07] Speaker 03: They asked him about the types of regressions that he performed.
[00:09:10] Speaker 03: He denied that he performed them.
[00:09:13] Speaker 03: It turns out there are 600-plus regression models that the witness performed that never saw the light of day.
[00:09:21] Speaker 03: The judges, in turn, never had an opportunity to question the witness about those 600 and how the witness arrived at providing written testimony for only two regression analyses.
[00:09:33] Speaker 03: And not only did the judges not have an opportunity to question the witness, Program Suppliers never had an opportunity to do so either.
[00:09:42] Speaker 03: The consequence of this is that the judges rely... Did you guys ask this witness?
[00:09:48] Speaker 03: I'm sorry.
[00:09:48] Speaker 01: Did you guys ask this witness at the hearing how many regression analyses did you perform?
[00:09:52] Speaker 03: We did not, but another party did.
[00:09:56] Speaker 01: Why didn't you?
[00:10:01] Speaker 01: You say it's really important to know how many regressions; you just made a powerful argument for why it's important to know
[00:10:07] Speaker 01: how many rounds they went before they got the answer they're testifying about.
[00:10:10] Speaker 01: So I'm curious as to why you wouldn't have asked that.
[00:10:13] Speaker 03: But again, when the responding party says we have no additional responsive documents, that's where it stops.
[00:10:22] Speaker 03: And we have no reason, we take it on good faith that the parties have no additional documents to produce.
[00:10:30] Speaker 01: So it was quite... And the other party moved to strike the testimony because they hadn't been sufficiently responsive, but you didn't do that.
[00:10:36] Speaker 03: We did not do that, again, but there was another party that did that.
[00:10:42] Speaker 01: But nothing would have stopped you from asking, just to confirm, if this is really that important, at the hearing you could have asked how many did you do, but you didn't?
[00:10:50] Speaker 03: No, the parties had already responded that there were no additional documents.
[00:10:54] Speaker 03: When a party says there are no additional... Well, in response to your discovery, they just said it was overbroad.
[00:11:00] Speaker 03: Well, overbroad compels you to return and ask,
[00:11:06] Speaker 03: well, can we narrow the discovery?
[00:11:08] Speaker 03: Can we narrow the question?
[00:11:10] Speaker 03: But usually when the responding party says, we have no more documents, we take it on good faith that there are no more documents.
[00:11:17] Speaker 03: So we don't pursue that discovery.
[00:11:19] Speaker 03: This is very different.
[00:11:20] Speaker 03: This is a situation where the parties responded that they had no documents.
[00:11:25] Speaker 03: They had no responsive documents.
[00:11:27] Speaker 03: It's very different from where we're objecting to the question you're asking.
[00:11:32] Speaker 03: There's a lot of real estate between objecting to
[00:11:35] Speaker 03: the question versus saying, we've given you everything that's responsive to your question.
[00:11:41] Speaker 03: And this is what happened in this case.
[00:11:42] Speaker 03: And in the ruling on the motion to strike, where the judges found that the party that sought to strike the testimony should have moved to compel, that ruling did not come out till the hearing was over.
[00:11:57] Speaker 03: So even if we had an opportunity to compel, we could not have done so because the
[00:12:03] Speaker 03: the proposed findings had already been submitted in that case.
[00:12:07] Speaker 03: So this is the critical, this is one critical point.
[00:12:10] Speaker 03: The inaccuracy and incompleteness of the record, alone, beyond anything else, is sufficient to remand this case, because the judges had no knowledge that 600-plus regressions were out there that they could have taken into consideration, which not only affects the way the judges looked at the evidence, but it also affects the credibility of the witness, to be quite honest.
[00:12:33] Speaker 03: So that is the one reason.
[00:12:36] Speaker 03: There are other problems with why this was not reasoned decision-making.
[00:12:40] Speaker 03: The other problem is that the way that the judges treated precedent in this case is also not based on reasoned decision-making.
[00:12:49] Speaker 03: The last two decisions in this proceeding, for the 98-99 royalty years and the 04-05 royalty years,
[00:13:00] Speaker 03: both employed survey evidence.
[00:13:03] Speaker 02: You say the way they've treated precedent.
[00:13:07] Speaker 02: Usually when we talk about an agency changing positions and having to explain itself, the change is with respect to some legal issue.
[00:13:19] Speaker 02: Here, there's a change in the sense that there were prior determinations in which
[00:13:28] Speaker 02: the agency emphasized surveys as opposed to regressions.
[00:13:34] Speaker 02: Here they use the regression as a starting point and then they look to surveys.
[00:13:40] Speaker 02: It seems different. And, to the extent every case is different,
[00:13:47] Speaker 02: they've explained, they've given reasons to think why this regression analysis is better than ones that
[00:13:56] Speaker 02: were tendered in the prior cases.
[00:13:58] Speaker 02: It doesn't seem problematic to me.
[00:14:05] Speaker 03: Well, in the 2005 Program Suppliers decision, and we were on that case also,
[00:14:15] Speaker 03: the Copyright Royalty Judges had started moving from viewing methodology towards survey evidence.
[00:14:27] Speaker 03: And the court, this court, recognized that the movement, the shift from viewing methodology to survey evidence, was a departure from precedent,
[00:14:41] Speaker 03: and that the judges had properly explained and provided reasoning for that.
[00:14:50] Speaker 03: So something similar has happened here, except that there is no, first of all, the judges moved from survey evidence to making regression analysis the starting point.
[00:15:05] Speaker 03: In prior proceedings, regression analysis... Well, instead they relied on both.
[00:15:10] Speaker 01: I mean, they said, look, we have all these methodologies in front of us.
[00:15:14] Speaker 01: I'm on JA3949.
[00:15:17] Speaker 01: And we're struck by the relative consistency of all these answers.
[00:15:22] Speaker 01: And so we conclude that the Horowitz survey and Professor Crawford's regression analysis, with adjustments, together are the best available measures.
[00:15:35] Speaker 01: They weren't abandoning surveys.
[00:15:38] Speaker 01: And in the past, they had
[00:15:40] Speaker 01: looked at the regression analysis to sort of support the surveys.
[00:15:45] Speaker 01: And here they said this is a much better regression analysis.
[00:15:47] Speaker 01: So we're going to look at both pieces.
[00:15:49] Speaker 01: These two, they went through all the pieces of evidence, and these two are the ones, and we're going to look at them together.
[00:15:55] Speaker 01: That's what we're relying on.
[00:15:57] Speaker 01: So they're quite express about what they're doing.
[00:16:00] Speaker 01: They do talk about, you know, in the past we've done more of that, but here's the problem with the surveys here, and here's the much improved
[00:16:09] Speaker 01: regression analysis.
[00:16:11] Speaker 01: So it seems to me they were quite upfront about, acknowledged what they were doing, acknowledged what had happened in the past, and explained why they did what they did, which seems to me entirely sufficient.
[00:16:23] Speaker 03: I know I only have a few more seconds, but I want to, please, answer your question.
[00:16:27] Speaker 03: There are a couple of points on that.
[00:16:31] Speaker 03: The judges, that determination was actually very inconsistent.
[00:16:34] Speaker 03: They said in the same final determination that they were treating
[00:16:38] Speaker 03: the Crawford regression analysis in the same manner as the Waldfogel analysis, which was in the prior decision.
[00:16:45] Speaker 03: The fact is they did not.
[00:16:47] Speaker 03: In the prior determination, the Waldfogel analysis was treated as corroborative evidence.
[00:16:58] Speaker 01: This one was much better.
[00:16:59] Speaker 01: This one didn't have a lot of the problems.
[00:17:01] Speaker 03: That goes to my next point, Your Honor.
[00:17:03] Speaker 03: Just because it's better does not make it better than
[00:17:07] Speaker 03: the way that the survey evidence has been treated in the past.
[00:17:15] Speaker 03: As a matter of fact, the Bortz survey was the evidence that was used in the 1998-99 royalty year determination as well as the 04-05.
[00:17:26] Speaker 03: They found that the Horowitz survey evidence most reflects the relative value of the programming.
[00:17:36] Speaker 01: Relative to the other surveys.
[00:17:38] Speaker 01: That was as compared to the other surveys.
[00:17:41] Speaker 01: At the end, they talked about how Horowitz fit in with all the other methodologies.
[00:17:46] Speaker 01: So that's all they said there.
[00:17:48] Speaker 03: I understand, but my point is simply saying that the data that we have for the regression is better,
[00:17:56] Speaker 03: just as the data for the Horowitz survey was better as compared to Bortz.
[00:18:03] Speaker 03: Just, almost always, it's not unusual for methodological presentations to improve over time.
[00:18:11] Speaker 03: The point is, did this improve?
[00:18:14] Speaker 01: Are the royalty judges allowed to respond and change how they value those methodologies based on their improvements, or are they stuck
[00:18:25] Speaker 01: with how they responded to the first round?
[00:18:28] Speaker 03: They have the discretion to do that, and my point is that they did not explain the shift.
[00:18:37] Speaker 03: Not only did they not explain the shift, I mean, the regression analysis has massive problems.
[00:18:44] Speaker 03: So even if we disagree on whether or not one was better than the other, in this record, based on the new revelations that we have,
[00:18:53] Speaker 03: 600 regression models.
[00:18:55] Speaker 01: If we put that aside, then what would you say about the regression analysis?
[00:19:00] Speaker 03: I would say that they did not explain it.
[00:19:02] Speaker 03: They found that the survey evidence that had been relied on from proceeding to proceeding, they found other survey evidence that's actually better than that.
[00:19:13] Speaker 03: But the link they did not make was why was the regression analysis better than
[00:19:20] Speaker 03: that survey evidence, whichever survey evidence.
[00:19:23] Speaker 03: In this case, it happened to be Horowitz.
[00:19:24] Speaker 01: They didn't have to say better.
[00:19:25] Speaker 01: They said we're going to rely on both of them.
[00:19:27] Speaker 03: Well, there's a big difference between... As an aside, by the way, the parties that presented the fee-based regression, both parties did not present this regression analysis as principal evidence.
[00:19:43] Speaker 03: They presented them as corroborative of
[00:19:46] Speaker 03: the survey evidence, in this case, the Bortz evidence.
[00:19:50] Speaker 03: So the parties themselves did not argue for this to be principal evidence.
[00:19:54] Speaker 03: And there are inherent flaws with regression analysis overall.
[00:19:59] Speaker 03: One of the key flaws of the regression analysis is the fact that the regression analysis is not a market value concept.
[00:20:10] Speaker 03: All of the data that are key to the regression analysis are data that are
[00:20:16] Speaker 03: extracted from the regulatory scheme.
[00:20:20] Speaker 03: The regression purports to show a relationship between programming minutes and fees paid by cable system operators, but that is not the case.
[00:20:35] Speaker 03: All of the information that makes up the regression model is based on
[00:20:42] Speaker 03: fees that are not free of the strictures of the statute, not a free market, which they are supposed to be in order to make it a market value concept.
[00:20:50] Speaker 01: All right.
[00:20:50] Speaker 01: Thank you very much.
[00:20:51] Speaker 03: Thank you.
[00:21:14] Speaker 04: May it please the court, Martin Totaro on behalf of the Copyright Royalty Judges in the Library of Congress.
[00:21:20] Speaker 04: The challenge for the Copyright Royalty Judges in this proceeding was to decide how to allocate fees the government has collected to copyright owners that provide content to cable systems.
[00:21:31] Speaker 04: Now, the problem is that the fees are set by statute rather than the market,
[00:21:36] Speaker 04: and the amount of fees doesn't generally vary based on the type of programming.
[00:21:41] Speaker 04: So on this record, there were multiple different methodologies, and the Copyright Royalty Judges used the methodology offered by Dr. Gregory Crawford as the starting point in their analysis.
[00:21:52] Speaker 04: That choice was eminently reasonable.
[00:21:54] Speaker 04: Cable systems decide which distant broadcast signals to retransmit to their subscribers.
[00:22:01] Speaker 04: So they have a menu before them, and then they have to pick which particular programs to retransmit.
[00:22:07] Speaker 04: Now, Crawford sought to establish a statistical relationship between the amount of royalties that a cable system paid and the value of that particular programming.
[00:22:19] Speaker 04: So the regression he looked at improved on prior regression analyses
[00:22:24] Speaker 04: in multiple ways, as thoroughly discussed by the Copyright Royalty Judges in the final order.
[00:22:31] Speaker 04: The biggest improvement is that he eliminated sampling in collecting his data.
[00:22:36] Speaker 04: So he had the entire universe of programming for the four years at issue here retransmitted by the Form 3 cable systems.
[00:22:45] Speaker 04: That's a difference in measurement between using an abacus and a calculator.
[00:22:50] Speaker 04: And that's the principal advancement.
[00:22:53] Speaker 04: He also looked at every year instead of a sample year.
[00:22:57] Speaker 04: And he also used what's called a fixed effects analysis, which looked at differences within particular cable systems in something called the subscriber group, which was made possible
[00:23:08] Speaker 04: by a congressional amendment, a congressional enactment in 2010, that allowed all of this data to be collected at the subscriber group level.
[00:23:18] Speaker 04: And that allowed Crawford to have many thousands more data points when constructing his
[00:23:24] Speaker 04: analysis.
[00:23:26] Speaker 04: The Crawford analysis is by no means perfect.
[00:23:29] Speaker 04: There is, as this Court recognized in its 2005 Settling Devotional Claimants decision, necessary imprecision because we have this mismatch between trying to replicate a relative marketplace on the one hand and a compulsory licensing scheme on the other.
[00:23:44] Speaker 04: If there are no particular questions about the Crawford analysis, I'd...
[00:23:53] Speaker 01: I'm stepping in dangerous waters here, but I don't know if you're able to just explain to me how, I didn't see the judges, the royalty judges, laying this out, but if they did, you can point me to it.
[00:24:02] Speaker 01: So as I get it, what Crawford did was he correlated the number of minutes of viewership and the amount of royalties paid for each category.
[00:24:14] Speaker 04: Yes, Your Honor.
[00:24:14] Speaker 01: Is that right?
[00:24:15] Speaker 01: Which essentially produces dollars per minute of viewing.
[00:24:20] Speaker 04: Yes, but, to not make things overly complicated,
[00:24:25] Speaker 04: it would be very easy to just add up minutes broadcast with total minutes by all the categories, but the difficulty here is there are many, many potential confounding variables that get in the way.
[00:24:40] Speaker 04: And so what Crawford did in his regression analysis in this case
[00:24:44] Speaker 04: starts at JA910, where he sort of lays it out in his appendix A.
The different control factors he used include, for example, whether a particular cable system has availability of a certain type of programming, the number of subscribers, and things of that sort.
[00:25:02] Speaker 01: And so his... So he's controlling for a lot of stuff, but at the end of the day he was just showing dollars per viewership minute.
[00:25:12] Speaker 04: Well, he's trying to establish a relationship between the choices that a cable system makes in the types of various programming it chooses.
[00:25:19] Speaker 04: And so the idea here is that a cable... Yeah, I still don't get how the dollars and the viewers... It's not viewers, it's the amount of material made available to viewers as opposed to viewership numbers in and of themselves, because on this record and elsewhere it's
[00:25:35] Speaker 04: pretty easily demonstrated that viewership isn't a proper number.
[00:25:38] Speaker 04: And so he's looking at the types of programming that particular cable systems, at the subscriber group level, pulled and then retransmitted to their individual subscribers.
[00:25:49] Speaker 04: And so from that determination, you can add up the total number of minutes in, for example, the sports category or the program suppliers category or the public television category.
[00:25:58] Speaker 04: And so one of the nuances of Crawford's analysis is that he found out that,
[00:26:04] Speaker 04: he used a metaphor, what he called sort of a fixed-price buffet.
[00:26:09] Speaker 04: He talks about this on JA 1481, that you can determine value by looking at differentiated programming.
[00:26:17] Speaker 04: So, for example, if there are two sorts of sitcoms from the 90s that are reruns that are being transmitted,
[00:26:22] Speaker 04: the subscriber might not care whether it's one particular sitcom or the other.
[00:26:29] Speaker 04: But if there's some sort of niche programming that you really can't find anywhere else, and that's unique, and then the cable system picks up that particular program and retransmits it, then that has a particular type of value.
[00:26:40] Speaker 04: So then Crawford, and the particulars of the regression are laid out at JA913,
[00:26:45] Speaker 04: he was able to take all of this information and come to
[00:26:48] Speaker 04: a per-minute, or marginal, basis for the value for each particular type of the six different categories at issue here.
[00:26:55] Speaker 04: And then he was able to look at the total number of minutes transmitted, which, again, because he didn't use sampling, to arrive at the particular allocation percentages.
[00:27:03] Speaker 01: So at the end of the day, he showed minutes that each category was transmitted?
[00:27:10] Speaker 04: Controlled for a variety of other different factors.
[00:27:13] Speaker 01: But not, the payments didn't matter at that point,
[00:27:16] Speaker 01: which is what they were choosing to transmit the most of.
[00:27:20] Speaker 04: Yes, but you had to look at the total number of payments, and you pay for a particular, what's called a distant signal equivalent, that has an array of particular programs.
[00:27:28] Speaker 03: So the royalties didn't come into the analysis.
[00:27:34] Speaker 02: Oh, it did not?
[00:27:36] Speaker 02: It did.
[00:27:36] Speaker 02: So yeah, so here's, I had a similar befuddlement.
[00:27:42] Speaker 02: When you say, you've framed this as the regression seeks to correlate marginal changes in hours to royalties.
[00:27:55] Speaker 04: Yes, Your Honor.
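For readers following the methodology discussion above, here is a minimal illustrative sketch, in Python, of the general kind of fixed-effects regression counsel describes: royalty fees (which, by statute, track a cable system's gross receipts) regressed on the minutes of each programming category carried, with per-system fixed effects, and allocation shares then formed from the estimated per-minute values times total minutes. The category names, toy numbers, and share formula are assumptions for illustration only; this is not Dr. Crawford's actual specification or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: one row per cable system per accounting period. "fee" stands in for
# the compulsory-license royalty paid (by statute roughly a fixed percentage of
# the system's gross receipts); the min_* columns are distant-signal minutes
# retransmitted in each hypothetical claimant category. All numbers are invented.
df = pd.DataFrame({
    "system_id":     ["A"] * 3 + ["B"] * 3 + ["C"] * 3,
    "period":        [1, 2, 3] * 3,
    "fee":           [466, 512, 578, 242, 292, 344, 820, 912, 1049],
    "min_sports":    [100, 120, 150, 40, 60, 80, 200, 240, 300],
    "min_programs":  [300, 310, 320, 200, 220, 240, 400, 420, 450],
    "min_public_tv": [80, 85, 90, 60, 60, 70, 100, 110, 120],
})

# Fixed-effects-style OLS: C(system_id) adds a dummy per system, so the minute
# coefficients are identified from variation within each system, loosely
# analogous to the subscriber-group fixed effects described in the argument.
model = smf.ols(
    "fee ~ min_sports + min_programs + min_public_tv + C(system_id)",
    data=df,
).fit()

# Estimated per-minute ("marginal") value for each category.
categories = ["sports", "programs", "public_tv"]
marginal = {c: model.params[f"min_{c}"] for c in categories}

# One plausible share calculation (an assumption, not the actual formula used):
# marginal value per minute times total minutes carried, normalized to sum to 1.
totals = {c: df[f"min_{c}"].sum() for c in categories}
value = {c: marginal[c] * totals[c] for c in categories}
shares = {c: v / sum(value.values()) for c, v in value.items()}
print(shares)
```

With these constructed numbers the fitted per-minute values come out to about 2.0, 0.5, and 0.2, so the sports category takes the largest share; with real data, the choice of controls and specification would matter far more than this toy setup suggests.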
[00:27:55] Speaker 02: And I thought that's very perplexing, because royalties are not a market; it's a statutory formula.
[00:28:04] Speaker 04: It is perplexing, and that's the problem.
[00:28:06] Speaker 02: And then my law clerk explained to me that the royalty
[00:28:11] Speaker 02: formula, the way that works is you just pay, it's a percentage of revenue.
[00:28:18] Speaker 02: You opt into this program and then you pay whatever, 1% of your revenue or whatever.
[00:28:24] Speaker 04: Yes, Your Honor, that's correct.
[00:28:25] Speaker 02: And so then I thought, well, okay, what you're really saying is you're correlating, on the margin, hours to revenue.
[00:28:35] Speaker 02: So if, other things equal, from one year to the next, you add an increment of sports programming, and then it turns out your revenue goes up by a certain amount, controlling for everything else, that seems like it's a rough measure of marginal value.
[00:28:56] Speaker 02: And so my question is, A, am I understanding all of that correctly,
[00:29:01] Speaker 02: and B, if I am, why don't you just express the regression as correlating marginal hours to marginal revenue?
[00:29:11] Speaker 04: Your Honor, I don't think it's a revenue analysis.
[00:29:13] Speaker 04: I think it's keyed toward the fees paid under the compulsory licensing scheme and the minutes transmitted, not viewed, factoring in a variety of these other particular control variables, because it's important to
[00:29:29] Speaker 04: sort of put these distant retransmitted signals in overall context.
[00:29:34] Speaker 02: These are going out under... Am I right or wrong that fees paid, royalties paid by the cable stations, are directly proportional to their revenues?
[00:29:46] Speaker 04: Yes, that's correct.
[00:29:47] Speaker 04: The Form 3 systems pay, I think it's a little more than 1% or something like that.
[00:29:52] Speaker 04: So you're correct that it's based on revenue in that sense, yes.
[00:29:58] Speaker 01: How does that correlation show, because I thought this is what he was saying, how would that correlation that you're talking about between minutes and, I thought it was royalties, how is that going to show the desire to maximize profits by obviously broadcasting the stuff that will get you the most viewers?
[00:30:19] Speaker 04: I think the idea, Your Honor, is that you're picking different types of programming from,
[00:30:24] Speaker 04: you know, an array of available programming, and these are going out on stations that have been bundled with lots of different programs.
[00:30:30] Speaker 04: So what Crawford had to do was look at not only the station being retransmitted, but then also factor in the amount of minutes on that station in the different categories.
[00:30:41] Speaker 04: And so I think, taking all the control variables aside, the general purpose is, if you're retransmitting more of a certain type of category, that category has more value.
[00:30:57] Speaker 04: If I could move to the issue of the 600.
[00:31:00] Speaker 01: It's going to have more value if it results in more subscribers, right?
[00:31:07] Speaker 04: That is always the goal of the cable system.
[00:31:10] Speaker 04: And the idea that Crawford then inferred is that if you're putting out a particular programming, a program, it's for the purpose of attracting or retaining a subscriber.
[00:31:21] Speaker 04: And so that is related.
[00:31:25] Speaker 04: If I could address the
[00:31:26] Speaker 04: purported 600 preliminary regression analyses.
[00:31:32] Speaker 04: I think our argument is set forth in our motion opposing remand that we filed on September 12th, and I'd just like to add two minor points.
[00:31:40] Speaker 04: The first is that, as Judge Raab pointed out, the satellite allocation proceedings have been settled.
[00:31:47] Speaker 04: Program Suppliers agreed to that settlement.
[00:31:50] Speaker 04: Second, and probably more importantly, it's not surprising that an incredibly complicated regression analysis has preliminary analyses.
[00:32:00] Speaker 04: Now, the judges, because the case settled, had no
[00:32:04] Speaker 04: ability to decide on the validity of that challenge, whether they actually were preliminary analyses, whether they needed to be turned over, things of that sort.
[00:32:13] Speaker 04: But the critical point on this particular record is that Program Suppliers could have developed it but chose not to, or did not.
[00:32:21] Speaker 04: And second, Crawford's analysis, comprehensive, complicated, whatever word you want to use to describe it, is also 100% transparent.
[00:32:32] Speaker 04: And so his entire record,
[00:32:34] Speaker 04: his entire analysis, which starts on, I think, 866 of the Joint Appendix, is transparent for everyone to see, for everyone to criticize.
[00:32:41] Speaker 01: Does it reveal the 600?
[00:32:43] Speaker 04: It did not reveal preliminary analyses, but it revealed all the data that went into the model.
[00:32:49] Speaker 04: It revealed the controls.
[00:32:51] Speaker 01: I know, but their point, if I understand it correctly, I don't mean to put words in their mouth, though, is that, well, if you do it 600 times and those results are all over the place, it looks like you waited, you did it and did it and did it until you got the one you liked.
[00:33:04] Speaker 01: And then you put it all up with a nice analysis and make it look good.
[00:33:10] Speaker 01: But if we knew that you had actually done a bunch of others, 600 of these, that it took you 600 times to get the one that looks good for purposes of this hearing, that would have really affected your credibility.
[00:33:20] Speaker 04: The "looked good" is, I think, where the argument fails.
[00:33:23] Speaker 04: If there's something wrong in that particular regression model,
[00:33:25] Speaker 04: the parties can point that out.
[00:33:27] Speaker 04: And I would just focus on how Program Suppliers, even in this appeal, is only focusing on two particular aspects of Crawford's regression model.
[00:33:36] Speaker 04: First, the allegation that it can't be replicated.
[00:33:39] Speaker 04: That's plainly incorrect.
[00:33:43] Speaker 01: Expert Gray said he didn't... No, yeah, I thought they were making a different argument.
[00:33:46] Speaker 01: I get the argument about replication.
[00:33:49] Speaker 01: I thought their point was this is whole new evidence that,
[00:33:54] Speaker 01: and I have no idea if this is true, but the theory would be, he ran this thing 600 times, which is way over the top on what you do to prepare a regression analysis in a context like this.
[00:34:07] Speaker 01: And those results were all over the place, which shows either he just ran it until he got the answer his clients wanted and then made it look real nice and turned it in,
[00:34:20] Speaker 01: or there's something just really wrong with that methodology that it has all these different results when you look at what happens when it's run again and again and again.
[00:34:28] Speaker 01: In a way that's relevant, in a way that's got nothing to do with replication.
[00:34:33] Speaker 04: And the Copyright Royalty Judges could have addressed those arguments had that sort of argument been developed before the judges through discovery.
[00:34:40] Speaker 04: But the record as it sits before
[00:34:43] Speaker 04: the Copyright Royalty Judges in this case, and this court now, those 600 purported preliminary analyses aren't before the court.
[00:34:51] Speaker 01: Well, they're not because, at least, there was another party who moved to strike this testimony because this information hadn't been disclosed, and the royalty judges said no, and that was before the hearing.
[00:35:03] Speaker 01: So they set up this whole problem.
[00:35:06] Speaker 04: The royalty judges said no.
[00:35:08] Speaker 04: As the court well knows, it grants extreme deference to how the royalty judges conduct their...
[00:35:13] Speaker 01: Why wouldn't you let people see the preliminary tests?
[00:35:17] Speaker 01: Why wouldn't you?
[00:35:19] Speaker 01: What was a good reason for not letting them, before the hearing,
[00:35:22] Speaker 01: have access to this discovery?
[00:35:24] Speaker 04: So to be clear, the Copyright Royalty Judges have taken no position whatsoever on whether that discovery should have been turned over or not.
[00:35:32] Speaker 04: All I'm merely saying here is that nothing precluded...
[00:35:34] Speaker 01: Well, they had a ruling before the hearing when someone said, we can't have this witness in because they haven't given us enough information about the tests and everything underlying his testimony.
[00:35:46] Speaker 01: And they said no.
[00:35:48] Speaker 04: There was a motion to strike that was denied.
[00:35:51] Speaker 04: That wasn't appealed, and it certainly wasn't raised in the opening brief or anything like that.
[00:35:56] Speaker 04: And now we have a record where we have the proffered information for all to see and criticize.
[00:36:02] Speaker 04: And their criticisms are, respectfully, minor and insubstantial.
[00:36:08] Speaker 04: And now there is this argument that there were preliminary analyses.
[00:36:12] Speaker 04: But they simply could have developed that in the record before this court, just as they did in the satellite allocation proceedings that have now been
[00:36:22] Speaker 04: settled.
[00:36:25] Speaker 04: I see my time has run.
[00:36:27] Speaker 04: Happy to answer any other questions about any aspect of the case.
[00:36:31] Speaker 01: We appreciate your presentation.
[00:36:33] Speaker 04: Thank you.
[00:36:36] Speaker 01: Thank you, Your Honor.
[00:36:37] Speaker 01: Did Mr. Olaniran have any time left?
[00:36:40] Speaker 01: We'll give you two minutes, Mr. Olaniran.
[00:36:42] Speaker 03: Thank you, Your Honor.
[00:36:46] Speaker 03: Just quickly, with – you were going to ask me a question earlier about what was the language of the discovery request, and I'm just going to – it's on – you'll find that on page four of
[00:36:59] Speaker 03: our motion to remand, where we articulate – where we repeat the questions we asked, and the response was that beyond the documents and data produced previously by CTV – that's Commercial Television Claimants – and the additional documents and data being produced today, there are no other documents to be produced in response to your request.
[00:37:22] Speaker 03: So that was how they responded to our question.
[00:37:26] Speaker 03: I just want to touch really quickly on the problem, on the fundamental problem, with the regression analysis.
[00:37:35] Speaker 03: The regression analysis, the standard for allocation, is relative marketplace value.
[00:37:43] Speaker 03: All of the data, again, that goes into the regression analysis does not come from the market.
[00:37:49] Speaker 03: It comes from the statute; the relative payments that have been analyzed are relative payments
[00:37:56] Speaker 03: that are required to be paid by statute.
[00:38:00] Speaker 03: The statutory requirement really dictates what drives royalties.
[00:38:05] Speaker 03: There are three principal components.
[00:38:09] Speaker 02: If it's the case that your royalty payment
[00:38:15] Speaker 02: is a percentage of your revenue?
[00:38:19] Speaker 02: That's what I was asking your friend about.
[00:38:21] Speaker 03: By statute, it is by statute, right?
[00:38:24] Speaker 02: It is.
[00:38:24] Speaker 03: There are three components to royalty payments.
[00:38:27] Speaker 03: Gross receipts of the cable system, the type of stations that the cable system carries, and the number of those types of stations, the combination.
[00:38:36] Speaker 03: Those are the elements that drive the royalty payments.
[00:38:40] Speaker 03: There is no component, on any station
[00:38:43] Speaker 03: or combination of stations, for the mix of programming.
[00:38:47] Speaker 02: But the greater the revenue for the cable company, the greater its royalty payments.
[00:38:54] Speaker 02: That's correct, but my... So as just a very rough way of thinking about kind of how to approximate a market where there is no market, what's wrong conceptually with saying we'll look at
[00:39:11] Speaker 02: how you change the programming on the margin and how royalties, which correlate to revenue, increase?
[00:39:21] Speaker 03: I think that's a very important question.
[00:39:24] Speaker 03: But we don't have that market,
[00:39:31] Speaker 03: which means that you have to look for information about the market that will drive your estimation.
[00:39:37] Speaker 03: The behavior of the cable systems is based solely on the regulation at that point.
[00:39:44] Speaker 03: And to the extent that you want to look at something like that, you might look at Dr. Gray's analysis, where he took, there's a minimum payment that's required regardless of whether you're carrying one signal or none.
[00:39:58] Speaker 03: There's a minimum payment required of cable systems
[00:40:04] Speaker 03: that don't carry any distant retransmitted signals.
[00:40:09] Speaker 03: And Dr. Gray did an analysis that's in the record about what happens when you look at those systems that actually choose to pay more than the minimum payment, and what the behavior is like with respect to the programming, with respect to the distribution of the shares.
[00:40:34] Speaker 03: That's a dramatically different result that you wind up with.
[00:40:39] Speaker 03: And I think that's actually closer to your question in terms of, well, what do you do?
[00:40:47] Speaker 03: Your question, which is, well, if I only have to pay the minimum fee, why am I paying these additional fees?
[00:40:56] Speaker 03: And I think that's close to your question, because the behavior of those paying additional fees is, I think, substantially different
[00:41:03] Speaker 03: from the behavior that Dr. Crawford attempted to model in his regression.
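As a companion to the above-minimum-fee point, here is a small hypothetical sketch of the kind of comparison being described: computing category shares once over all systems and once over only the systems whose royalty payment exceeds the statutory minimum, where carriage arguably reflects a real choice to pay more. The data, the assumed minimum, and the minutes-based share proxy are all invented for illustration; this is not Dr. Gray's actual analysis.

```python
import pandas as pd

# Hypothetical per-system data: royalty fee paid and distant-signal minutes
# carried in each made-up claimant category. MINIMUM_FEE is an assumed
# statutory floor for this toy example, not the real figure.
MINIMUM_FEE = 500.0

df = pd.DataFrame({
    "system_id":     ["A", "B", "C", "D"],
    "fee":           [500.0, 500.0, 1400.0, 2200.0],
    "min_sports":    [0, 30, 240, 380],
    "min_programs":  [200, 220, 300, 320],
    "min_public_tv": [50, 60, 80, 90],
})

MINUTE_COLS = ["min_sports", "min_programs", "min_public_tv"]

def category_shares(frame: pd.DataFrame) -> pd.Series:
    """Share of total carried minutes by category (a crude stand-in for value shares)."""
    minutes = frame[MINUTE_COLS].sum()
    return minutes / minutes.sum()

# Compare shares across all systems with shares among only the systems that
# chose to pay more than the assumed minimum.
comparison = pd.DataFrame({
    "all systems": category_shares(df),
    "above minimum": category_shares(df[df["fee"] > MINIMUM_FEE]),
})
print(comparison)
```

The design choice here is simply to condition on a behavior that is not forced by the statute (paying above the floor) and see whether the category distribution looks different, which is the shape of the contrast counsel attributes to Dr. Gray.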
[00:41:10] Speaker 03: And finally, Your Honor, you asked me earlier on, you quoted from the final determination the consistency language that the final determination uses.
[00:41:20] Speaker 03: And I'd like to direct you to page 26 of our opening brief, which shows that there were actually more inconsistencies than the final determination reflects.
[00:41:30] Speaker 01: Thank you very much.
[00:41:31] Speaker 01: Thank you very much.