Authors:

  1. Walker Ravina, walkerravina@google.com (Google)
  2. Ethan Sterling, esterling@google.com (Google)
  3. Olexiy Oryeshko, olexiy@google.com (Google)
  4. Nathan Bell, nathanbell@google.com (Google)
  5. Honglei Zhuang, hlz@google.com (Google)
  6. Xuanhui Wang, xuanhui@google.com (Google)
  7. Yonghui Wu, yonghui@google.com (Google)
  8. Alexander Grushetsky, grushetsky@google.com (Google)


ABSTRACT

The goal of model distillation is to faithfully transfer teacher model knowledge to a model which is faster, more generalizable, more interpretable, or possesses other desirable characteristics. Human-readability is an important and desirable standard for machine-learned model interpretability. Readable models are transparent and can be reviewed, manipulated, and deployed like traditional source code. As a result, such models can be improved outside the context of machine learning and manually edited if desired. Given that directly training such models is difficult, we propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.

The proposed distillation methodology approximates a model’s univariate numerical functions with piecewise-linear curves in a localized manner. The resulting curve model representations are accurate, concise, human-readable, and well-regularized by construction. We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases. We demonstrate the effectiveness of the overall distillation technique and our curve-fitting algorithm using four datasets across the tasks of classification, regression, and ranking.

CCS CONCEPTS

Computing methodologies → Machine learning approaches.

KEYWORDS

Model distillation; human readable; piecewise-linear curves

1      INTRODUCTION

Interpretable models are critical for high-stakes decision-making scenarios [21] such as guiding bail or parole decisions, assessing loan eligibility, and guiding medical treatment decisions. In these cases, the explanation of a model’s output (e.g. individual feature contributions) should be examinable and understandable, to ensure transparency, accountability, and fairness of the outcomes.

To achieve intrinsic interpretability, univariate functions are widely used in interpretable models. In the classic Generalized Additive Models (GAMs) [11], the model is a sum of univariate shape functions,

𝐹(𝑥) = Σ𝑖 𝑓𝑖(𝑥𝑖),
where the 𝑥𝑖’s are the 𝑛 features and the 𝑓𝑖’s are the shape functions. Such a model is simple but often less accurate than a model with feature interactions. Recently, Lou et al. [18] showed that adding a limited number of pairwise feature interactions allows GAM-style additive models to capture a significant fraction of the accuracy of a fully-interacting model. In many cases of interest, such feature interactions are intuitively captured with products of univariate functions,

𝐹(𝑥) = Σ𝑖 𝑓𝑖(𝑥𝑖) · 𝑔𝑖(𝑐𝑖),
or products of groups of features,

𝐹(𝑥) = Σ𝑖 𝑓𝑖(𝑥𝑖) · Π𝑗 𝑔𝑖,𝑗(𝑐𝑗),
where the magnitude of one function (i.e. 𝑓𝑖) is modulated by a function (i.e. 𝑔𝑖 or 𝑔𝑖,𝑗) of another "context" feature (i.e. 𝑐𝑖) [32]. In other cases, the interaction amongst features is adequately approximated by additive models of univariate functions nested within univariate functions,

𝐹(𝑥) = Σ𝑖 𝑔𝑖( Σ𝑗 𝑓𝑖,𝑗(𝑥𝑗) ),
where the outer function 𝑔𝑖 captures nonlinear behavior [7]. Indeed, the Kolmogorov–Arnold representation theorem [16, 27] guarantees that every continuous multivariate function of 𝑛 inputs can be represented as a sum of 2𝑛 + 1 such terms,

𝐹(𝑥1, …, 𝑥𝑛) = Σ𝑞=0..2𝑛 𝑔𝑞( Σ𝑝=1..𝑛 𝑓𝑞,𝑝(𝑥𝑝) ).
In practice a single outer function is often sufficient, yielding an interpretable model.

In the classic GAM models, splines are used as shape functions [11]. Piecewise-linear functions are another commonly used shape function [30]. These representations contain a small number of variables (e.g. knots) and thus are concise and human-readable. However, directly optimizing such representations often yields less accurate models than alternative model representations. For example, Lou et al. [17] showed that learning spline GAMs is less accurate than learning bagged boosted decision forest GAMs. Our experiments show similar results for directly optimizing GAMs composed of piecewise-linear curves using Stochastic Gradient Descent (SGD) methods. Broadly speaking, decision forest GAM representations have an advantage during model optimization, but the resulting models are not human-readable. This is the case even when there exists a simpler model with a concise, human-readable form that provides comparable accuracy.

Inspired by model distillation work showing that relatively small decision forests or neural networks can match the accuracy of complex models when distilled from much larger ensembles, even though they cannot reach that accuracy when trained directly on the data [5, 12], we propose to distill interpretable models into readable representations in a separate process after model optimization. This decouples the initial, learned model representation from the final, published model representation. For example, the proposed distillation methodology can be applied to additive models trained using bagged boosted decision trees [17], as well as additive neural nets [2, 32].

In this paper, we describe a technique for distilling models composed of univariate components into human-readable representations, in particular, the piecewise-linear curves described in Section 2.2. The output of our distillation technique is illustrated in Listing 1 and Figure 1, which show textual and graphical representations of piecewise-linear curves obtained by applying our approach to a decision forest GAM trained on the COMPAS dataset (described in Section 2.1). The distilled model is a concise representation of the decision forest GAM model and is converted to human-readable source code. From here on, we will use "curves" to refer to piecewise-linear curves, "curve models" to refer to models where each component is a curve, and "code" to refer to the textual representations of curve models or curves.

The rest of this paper is structured as follows. After presenting the preliminaries in Section 2, we elaborate on the benefits of using curve models in Section 3. We then describe the localized distillation process in Section 4 and the piecewise-linear approximation algorithm, sometimes referred to as segmented regression [30], for creating curve models in Section 5. Lastly, we present experimental results on four datasets: COMPAS, FICO, MSLR-WEB30K, and CWS in Section 6 and conclude the paper in Section 7.

2      PRELIMINARIES

Throughout the paper, we use the data sets from our experiments as concrete examples to explain our methods, so we first describe them in this section. We also give the formal definition of piecewise-linear curves.

2.1      Data Sets

We used the following four datasets to represent different settings: classification, regression, and ranking. The first three are publicly available.

In each case, we distill a decision forest GAM and evaluate the accuracy of the distilled curve models. The COMPAS and FICO datasets represent high-stakes domains [21] in which the benefits of curve models, discussed below, are particularly compelling. FICO, MSLR-WEB30K, and CWS have been previously studied in the context of interpretability [2, 7, 18, 32]. Furthermore, the results from MSLR-WEB30K demonstrate that the accuracy of this approach is not limited to small datasets.

2.2      Piecewise-Linear Curves

A piecewise-linear curve (PWLCurve) is defined by a list of control points 𝑆 = [(𝑥𝑘, 𝑦𝑘)], 𝑘 = 1, …, 𝐾, through which the curve must pass. Between control points, output 𝑦-values are determined by performing linear interpolation between neighboring control points. Beyond the leftmost or rightmost control point, output values are capped to the 𝑦-value of that control point. More formally, assuming the 𝑥𝑘’s are ordered, i.e. 𝑥𝑘 < 𝑥𝑘+1, a piecewise-linear curve is defined as:

𝑐(𝑥) = 𝑦1 if 𝑥 ≤ 𝑥1,
𝑐(𝑥) = 𝑦𝐾 if 𝑥 ≥ 𝑥𝐾,
𝑐(𝑥) = 𝑦𝑘 + (𝑦𝑘+1 − 𝑦𝑘) · (𝑥 − 𝑥𝑘)/(𝑥𝑘+1 − 𝑥𝑘) if 𝑥𝑘 ≤ 𝑥 ≤ 𝑥𝑘+1.
In most cases of interest, 5 or 6 control points, defining 4 or 5 interior segments, are sufficient to capture the desired behavior. We allow for an optional 𝑥-transformation, specified with the fx argument, to fit curves to data with different scales. When an 𝑥-transformation is present it is applied to the input value and to the 𝑥-values of all the control points, and then linear interpolation is performed in the transformed space. We support identity (default), log, log1p, and symlog1p transformations. Here symlog1p is defined as sgn(x) * log1p(abs(x)) and is suitable for highly-variable features that take on both positive and negative values. Univariate categorical functions are represented by EnumCurve, which directly maps input values to outputs using a discrete mapping.
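The evaluation semantics above can be sketched in a few lines of Python (an illustrative sketch, not the pwlfit implementation; the function names are ours):

```python
import numpy as np

def symlog1p(x):
    # sgn(x) * log1p(|x|): suitable for signed, highly-variable features.
    return np.sign(x) * np.log1p(np.abs(x))

def pwl_curve(x, points, fx=lambda v: v):
    """Evaluate a piecewise-linear curve at x.

    points: list of (x_k, y_k) control points with increasing x_k.
    fx: optional strictly increasing x-transformation; interpolation
        is performed in the transformed space.
    """
    xs = np.array([fx(p[0]) for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    # np.interp clamps to the boundary y-values outside [x_1, x_K].
    return np.interp(fx(x), xs, ys)

pts = [(1, 1.0), (4, 3.0), (10, 5.0)]
print(pwl_curve(0.5, pts))  # 1.0 (clamped below x_1)
print(pwl_curve(7.0, pts))  # 4.0 (interpolated between (4, 3) and (10, 5))
```

Note that np.interp already provides the boundary clamping described above, so applying fx to the input and to the control-point x-values is the only extra step.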

3 BACKGROUND & MOTIVATION

Interpretable models are critical for high-stakes decisions [21] and provide many advantages over more complex model structures [6, 10]. In this section we explain how distilling interpretable models into curve models reinforces these benefits and addresses a variety of real-world engineering challenges. Here, one underlying theme is that distilling models into human-readable source code reduces a novel machine learning problem to an established software engineering problem with an abundance of existing solutions.

3.1      Greater Transparency

A model is transparent if it provides a textual or graphical representation that enables its behavior to be understood comprehensively. One way in which the proposed method provides greater transparency is by simplifying graphical depictions of a model while retaining its essential characteristics. It is often argued, implicitly or explicitly, that the shape plots of an interpretable model are an exact description of the model and therefore provide a reliable way to understand the model. While this claim is narrowly true, it is misleading in general. Unless given specific guidance, humans will naturally discount certain fine-grained details of the plots when developing an understanding of the model. By distilling interpretable models to a concise representation, we discard extraneous characteristics and reduce the mental effort necessary to understand the model. For example, it is not immediately obvious what understanding an individual should derive from the shape plots of the feature_0011 (body stream length) and feature_0128 (inlink number) features in the initially-learned MSLR-WEB30K model, shown in Figure 2. Indeed, different individuals may derive qualitatively different understandings from these graphical depictions. However, given the additional knowledge that the distilled curve model represented by the overlaid curves in Figure 2 has nearly identical accuracy, an observer can make much stronger inferences about the model’s essential characteristics. Interpretability can be increased even further by imposing monotonicity constraints. We discuss the effect of such constraints in Section 6.4.

Clearly, when distillation yields a simpler model with comparable accuracy we would say the distillation process has succeeded. However, instances where distillation yields a model with inferior accuracy warrant further investigation, because the apparent “failure” can often be attributed to essential characteristics of the teacher model that were not successfully transferred to the student because they violate a prescribed notion of human-interpretability. We examine one representative example of this phenomenon in Section 6.2. While a complete discussion of this principle is beyond the scope of this paper, we note that the idea can be viewed as an extension of the use of structural constraints to define “interpretable” models, just now applied to the structure of individual functions in the model. Under this policy, if the accuracy of a candidate model cannot be reproduced using a predefined class of expressive, “human-scale” functions (e.g. curves with a small number of truncated control points) its transparency would be called into question.

3.2      Constructive Regularization

The proposed method can also be viewed as a post-hoc regularization process that is completely compatible with, and complementary to, optimization-based regularization techniques (e.g. L1/L2 penalties or monotonicity constraints). In the context of regularization, our emphasis on conciseness is aligned with the minimum description length principle [28] for model selection. Ustun and Rudin [24] applied similar reasoning to motivate linear models with small, integer-valued weights. The constrained description length of curves provides limited capacity for capturing idiosyncratic behavior. As a result, curve distillation successfully removes aberrations from teacher model functions. This regularization effect can be seen in Figure 1 and Figure 2. The fewer the segments, the greater the effect. To find the most concise curve model we can repeatedly apply the proposed method with a decreasing number of control points. Naturally, the optimality of this approach is subject to the limitations of our localized distillation methodology (see Section 4) and curve approximation algorithm (see Section 5). While it is difficult to directly compare models with different functional representations, comparing the length and readability of their corresponding code is instructive.

One practical advantage of curve-based regularization is that regularity is enforced by construction and the complexity of individual curves is readily apparent and quantifiable. Therefore, organizations that adopt curve models can set objective guidelines about model complexity that developers can anticipate when submitting model candidates for approval. Such guidelines can specify the maximum number of curve segments, maximum number of significant digits per curve control point, or monotonicity of the curve. Similar to the use of nothing-up-my-sleeve numbers in cryptography [29], curve models enable developers to preemptively address suspicions about potential weaknesses and constructively prove the robustness of a given model candidate. In general, standardizing development around curve models is a straightforward way for organizations to systematically enforce best practices, defend against common mistakes and pitfalls, and expedite model verification and approval. The accessible, readable nature of curve models enables organization members beyond engineers (e.g. executives, product managers, etc.) to participate in this approval process.

3.3      Readable, Editable Code

Curve model code can be read, reviewed, merged, and versioned like conventional source code. An example model for the COMPAS dataset is shown in Listing 1. One can understand how a curve model would behave under novel or extreme conditions by mentally “evaluating” the model under hypothetical “what if?” scenarios without the need for additional tools. Subjecting models to a traditional source code review process facilitates a more rigorous examination of the model’s characteristics and greater accountability than is possible with non-readable models. Indeed, conducting “model review” through source code review ensures that the candidate model itself - not some separate, potentially inconsistent description or artifact of the model or how it was trained - is the subject of review. In the event that undesirable model behavior is discovered, the model’s code may be directly edited to correct such issues. For example, in the case of the COMPAS model a user may wish to deliberately cap the contribution of features such as priors_count and length_of_stay for legitimate policy reasons not captured by classification metrics such as AUC-ROC. The contribution of other features can be entirely removed. Agarwal et al. [2] discussed how such an approach of training with biased features and then removing them can potentially be better than simply training without biased features. This approach can prevent the model from extracting bias through other features which are correlated with the biased ones.
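As an illustration of such an edit, capping a feature's contribution amounts to rewriting a few control points. The sketch below is hypothetical (cap_curve and the control-point values are ours, not taken from the COMPAS model) and assumes a non-decreasing curve; since a curve is constant beyond its last control point, clamping the first point at or above the cap and dropping the rest suffices:

```python
def cap_curve(points, y_cap):
    """Edit a curve's control points so its output never exceeds y_cap.

    Assumes the curve is non-decreasing; points are (x, y) pairs.
    The segment that crosses the cap is flattened approximately.
    """
    capped = []
    for x, y in points:
        if y >= y_cap:
            # First point at/above the cap: clamp it and stop; the curve
            # is constant beyond its last control point by definition.
            capped.append((x, y_cap))
            break
        capped.append((x, y))
    return capped

# Hypothetical priors_count curve, capped at a contribution of 0.5.
print(cap_curve([(0, 0.0), (5, 0.3), (15, 0.8), (30, 1.2)], 0.5))
# [(0, 0.0), (5, 0.3), (15, 0.5)]
```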

Model transparency is essential in the context of high-stakes decisions [21] arising in criminal justice, finance, health care, and other areas. Providing the complete source of the model in simple, portable, human-readable code makes the models transparent. Compared to human-readable models produced by CORELS [3], which are expressed in universally-understandable if-then language, curve models sacrifice accessibility for greater expressiveness and general-purpose application.

3.4      Collaborative Model Development

Curve distillation is compatible with any algorithm or modeling technique that results in univariate functions. In the experiments section we apply the proposed technique to decision forest GAMs on several datasets. Previous work [32] applied the proposed technique to GAMs learned via neural networks, as well as similar neural networks with limited interactions via multiplicative pairs. Organizing collaborative development around curve models enables engineers to apply a plurality of different tools, techniques, or platforms to optimize components of a (potentially large-scale) model. Engineers are free to choose a modeling approach that maximizes their productivity, similarly to how engineers use multiple IDEs, code formatters, or linters to collaboratively develop software. Curve distillation can be viewed as a “format conversion” tool that translates an arbitrary and potentially exotic model representation into a fixed, agreed-upon vocabulary of human-readable building blocks.

3.5      Straightforward Deployment

Curve models are fast-to-evaluate and straightforward to deploy. Since evaluation requires minimal computation - just a handful of floating point operations per curve - curve models are well-suited for performance-critical applications. Curves are a portable, platform-agnostic representation that can be natively supported in a variety of languages or systems with little effort. For example, Listing 2 shows a C++ implementation of a COMPAS model with 2 segments. In general, curve models are straightforward to deploy because they offer a multitude of integration options. Curves can be embedded in configuration files, passed via CGI parameters, manually embedded into complex applications in a piecemeal fashion, systematically translated to a target representation, or evaluated by existing runtime systems with a few incremental extensions.

4      LOCALIZED DISTILLATION

Our distillation process takes two inputs: a teacher model containing one or more univariate functions, and a representative dataset (generally the training data). Our method differs from conventional distillation techniques in that we (1) distill each univariate function in isolation and (2) optimize for mean squared error (MSE) when approximating each univariate function. Specifically, each univariate function in the teacher model is evaluated on the dataset to produce representative (𝑥, 𝑦) example pairs. For discrete categorical features we create a mapping where each unique 𝑥 is mapped to the mean 𝑦. For numerical features, we produce a PWLCurve using the approximation algorithm described in Section 5. If the teacher model contains univariate functions nested within other univariate functions we replace the source functions in a bottom-up fashion. Otherwise, all non-nested functions can be approximated in parallel.

The final model is constructed by replacing each original univariate function with its PWLCurve approximation.
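The per-function step can be sketched as follows (a simplified illustration; teacher_examples and distill_categorical are our names, and the resulting (x, y) pairs would be fed to the fitting algorithm of Section 5):

```python
import numpy as np

def teacher_examples(f, xs):
    """Evaluate one univariate teacher function to get (x, y) pairs."""
    xs = np.asarray(xs, dtype=float)
    return xs, np.array([f(x) for x in xs])

def distill_categorical(f, values):
    """Discrete features: map each unique value to the mean teacher output."""
    return {v: float(np.mean([f(u) for u in values if u == v]))
            for v in set(values)}

# Numerical features: the (x, y) pairs feed the curve-fitting algorithm
# of Section 5; categorical features become an EnumCurve-style mapping.
xs, ys = teacher_examples(lambda x: x * x, [0, 1, 2, 3])
print(sorted(distill_categorical(lambda s: {"a": 1.0, "b": 3.0}[s],
                                 ["a", "a", "b"]).items()))
# [('a', 1.0), ('b', 3.0)]
```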

Conventionally, model distillation involves a global optimization using the same (or at least similar) objective to the original teacher model training. This objective may differ from a point-wise MSE objective. For example, ranking objectives often have pair-wise definitions. Why then do we advocate a localized optimization using a MSE objective in all circumstances? The primary answer is that, in the context of interpretable models, there is substantial value in maintaining a strong one-for-one correspondence between each source function and target function. Notably, this allows us to visualize each shape function in the teacher model against its corresponding curve replacement. Additionally, we can attribute distillation failures - data instances where the curve model is less accurate than the teacher model - to specific univariate functions, and take remedial actions. For example, in Figure 5 we can immediately tell that the shape function of 𝑥1 was not well-approximated by a curve. In the experiments section we show that the meaningful behavior of nearly all shape functions can be accurately captured by curves with three to five segments. Furthermore, when the meaningful behavior is not captured, it is generally due to inherently non-interpretable behavior being lost.

While a global optimization approach (i.e., optimizing the parameters of all curves in the target model simultaneously) using a problem-specific metric might produce a more accurate result, it is computationally more expensive and would lack the same one-to-one correspondence with the teacher model, making distillation failures more difficult to diagnose. Furthermore, if higher accuracy is desired, the output of the proposed distillation process can be used to initialize a global optimization of the curve model’s parameters.

5      PIECEWISE-LINEAR CURVE APPROXIMATION

Given a univariate numerical function 𝑓 (𝑥) → 𝑦, our goal is to produce a PWLCurve 𝑐 (𝑥) → 𝑦 that faithfully approximates 𝑓 (𝑥) by minimizing the 𝑀𝑆𝐸(𝑐(𝑥), 𝑓 (𝑥)) over sample data. Clearly, the accuracy of the overall distillation method depends critically on the accuracy of the individual curve approximations - i.e. how much metric loss is incurred when each 𝑐 (𝑥) is substituted for the corresponding 𝑓 (𝑥) in the trained model.

Additionally, the practical success of the methodology also depends on the robustness and efficiency of the approximation algorithm. To enable systematic use of curve distillation in model training pipelines, the approximation algorithm must run with minimal configuration. Complex hyperparameters pose a significant barrier to entry. We have designed pwlfit, our piecewise-linear approximation algorithm, so that in practice users only need to consider the num_segments and mono (monotonicity) parameters. While num_segments=5 segments and mono=False is sufficient to get high accuracy (as demonstrated by our experiments), it is desirable to investigate whether the model can be further simplified with fewer segments or with monotonicity restrictions. To facilitate such investigations it is important that distillation runs quickly (e.g. less than 1 second per function), which enables interactive analysis via Jupyter notebooks [15] or other tools. These practical considerations have informed various decisions in the design of pwlfit. In particular, we prefer an algorithm which quickly and reliably yields high-accuracy results with minimal configuration to one which sacrifices either of these practical considerations for marginal gains in accuracy.

In this section we will describe the salient characteristics and noteworthy features of pwlfit. We invite interested readers to consult the publicly-available source code of pwlfit [22] for additional details.

5.1      Algorithm

Given a list of (𝑥, 𝑦, 𝑤𝑒𝑖𝑔ℎ𝑡) points and a desired number of segments 𝑘, we search for a PWLCurve that minimizes the mean squared error (MSE). A PWLCurve with 𝑘 segments is characterized by its 𝑘 + 1 control points – a set of 𝑥-knots and their corresponding 𝑦-knots. Given only the 𝑥-knots, we can solve a linear least squares expression for the optimal 𝑦-knots and the resulting error. Since we don’t know the correct 𝑥-knots, we search through the space of possible 𝑥-knots and solve a least squares expression at each step to calculate the error.
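The key property here is that, for fixed 𝑥-knots, the curve's output is linear in its 𝑦-knots, so the optimal 𝑦-knots solve an ordinary least squares problem. A minimal sketch (our own formulation using hat basis functions, not the pwlfit source):

```python
import numpy as np

def basis_matrix(x, knots):
    """Hat-function basis: row i holds each knot's interpolation weight at x[i].

    Since c(x) = sum_k y_k * B_k(x), the optimal y-knots solve a linear
    least squares problem once the x-knots are fixed.
    """
    x = np.clip(np.asarray(x, float), knots[0], knots[-1])
    B = np.zeros((len(x), len(knots)))
    idx = np.clip(np.searchsorted(knots, x, side="right") - 1, 0, len(knots) - 2)
    t = (x - knots[idx]) / (knots[idx + 1] - knots[idx])
    B[np.arange(len(x)), idx] = 1 - t
    B[np.arange(len(x)), idx + 1] = t
    return B

def solve_y_knots(x, y, knots, w=None):
    """Optimal y-knots for fixed x-knots, by (weighted) linear least squares."""
    B = basis_matrix(x, np.asarray(knots, float))
    y = np.asarray(y, float)
    if w is not None:
        sw = np.sqrt(np.asarray(w, float))
        B, y = B * sw[:, None], y * sw
    sol, *_ = np.linalg.lstsq(B, y, rcond=None)
    return sol

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
print(solve_y_knots(x, 2 * x, [0.0, 1.0, 2.0]))  # ~[0. 2. 4.]
```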

5.1.1       Initial Downsampling. For performance, we randomly downsample large datasets to approximately one million points before fitting. We downsample to reduce the cost of sorting, which dominates the runtime for large data. This downsampling imposes a negligible quality loss. To further reduce runtime, we discretize the search space for 𝑥-knots. We choose num_samples 𝑥-values from the data, spaced equally by cumulative weight, and search over the combinations of 𝑥-knots from that sampled set of candidates. Using the default 100 samples, our candidates are the 𝑥-values at (0%, 1.01%, . . . , 98.99%, 100%) of the cumulative weight.

5.1.2       Knot Discretization. For data with many repeated 𝑥-values, some of our candidates will be duplicates. For example, 55% of the values in the length_of_stay feature in the COMPAS data set are 0 or 1. In such cases, we iteratively resample at higher rates (such as 0%, 0.505%, 1.01%, etc.) until we collect a suitable number of distinct candidates, never exceeding the specified num_samples parameter.
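The weighted-quantile candidate selection can be sketched as follows (a simplified version: we deduplicate directly rather than fully reproduce pwlfit's iterative resampling at higher rates):

```python
import numpy as np

def candidate_knots(x, w, num_samples=100):
    """Candidate x-knots at equal steps of cumulative weight.

    With heavy ties (e.g. length_of_stay in COMPAS), the weighted
    quantiles collapse to duplicates; pwlfit then resamples at higher
    rates, which this sketch approximates by deduplicating.
    """
    order = np.argsort(x)
    xs, ws = np.asarray(x, float)[order], np.asarray(w, float)[order]
    cw = np.cumsum(ws) / ws.sum()
    qs = np.linspace(0.0, 1.0, num_samples)
    idx = np.clip(np.searchsorted(cw, qs), 0, len(xs) - 1)
    return np.unique(xs[idx])

xs = np.arange(1000.0)
# Uniform weights: candidates land at 0%, 25%, 50%, 75%, 100% of the weight.
print(candidate_knots(xs, np.ones_like(xs), num_samples=5))
```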

5.1.3       Condensation. To minimize the cost of each linear least squares step, we condense the data using a novel technique described in Appendix B. Given num_samples candidate knots, we can condense the full data into two synthetic points per adjacent pair of candidates, for a total of 2 * (num_samples - 1) synthetic points. For any function that’s linear between each adjacent pair of candidate 𝑥-knots, which is guaranteed by our choice of discrete candidate 𝑥-knots, these condensed points perfectly recreate the loss of that function over the full data set. We run our linear least squares solver on the condensed points instead of the full data set, which reduces our cost per solve from O(num_points) to O(num_samples). This is purely a performance optimization, with no quality implications.

5.1.4    Global Optimization via Greedy Search. After discretization, the solution space consists of (num_samples choose num_segments + 1) 𝑥-knot combinations, which is still too large for an exhaustive search. To make the search tractable we use a greedy search heuristic that optimizes one 𝑥-knot at a time. Specifically, at each step of the process we evaluate the error associated with each candidate 𝑥-knot, and keep the candidate that yields the least error. With this approach, we optimize in two stages. We begin with a single 𝑥-knot as our solution, and greedily add the best remaining candidate 𝑥-knot until our solution consists of (num_segments + 1) 𝑥-knots. Then we cycle through our solution, removing one 𝑥-knot at a time and replacing it with the best remaining candidate 𝑥-knot, which could be the same 𝑥-knot that we just removed. We continue this cycle of iterative improvements until our solution converges, or until we’ve exceeded the maximum number of iterations (defaulting to 10 iterations).
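The two-stage greedy search can be sketched generically (error_fn stands in for the per-step least squares solve; this interface is our assumption, not pwlfit's API):

```python
def greedy_knot_search(candidates, error_fn, num_knots, max_iter=10):
    """Greedy search over discrete x-knot combinations.

    error_fn(knots) returns the least-squares error for a sorted tuple
    of x-knots. Stage 1 grows the solution one knot at a time; stage 2
    cycles through the solution, re-choosing each knot until convergence.
    """
    def best_with(base):
        remaining = [c for c in candidates if c not in base]
        return min((error_fn(tuple(sorted(base + [c]))), c) for c in remaining)

    # Stage 1: greedily add the best remaining candidate knot.
    solution = []
    while len(solution) < num_knots:
        _, c = best_with(solution)
        solution.append(c)
    # Stage 2: cyclic replacement until converged or max_iter reached.
    for _ in range(max_iter):
        changed = False
        for i in range(num_knots):
            base = solution[:i] + solution[i + 1:]
            _, c = best_with(base)  # may re-pick the knot just removed
            if c != solution[i]:
                solution[i] = c
                changed = True
        if not changed:
            break
    return tuple(sorted(solution))
```

Stage 2 matters because the knot that was best to add early on may no longer be best once the rest of the solution is in place.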

5.1.5 Slope Constraints & Monotonicity. pwlfit can impose a minimum and/or maximum slope on the solution via bounded least squares. Instead of solving the least squares expression directly for the 𝑦-knots, we solve it for the deltas between adjacent 𝑦-knots.

Then we impose a min/max slope by bounding the deltas. Slope restrictions can be used to limit the spikiness of curves, but we primarily use them to impose monotonicity. For example, specifying min_slope=0 restricts to monotonically non-decreasing functions while max_slope=0 restricts to monotonically non-increasing functions. Specifying a min_slope greater than 0 or a max_slope less than 0 restricts to strictly increasing or decreasing functions, respectively. pwlfit can deduce the direction of monotonicity by applying isotonic regression [26] to the condensed points. We fit an increasing and a decreasing isotonic regression, and use the direction that minimizes mean squared error. The user can override this behavior by specifying the direction explicitly or by disabling monotonicity entirely.
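The automatic direction detection can be sketched with a small pool-adjacent-violators routine (our own minimal isotonic regression implementation, used here so the sketch is self-contained; x-values are assumed sorted):

```python
import numpy as np

def pava_increasing(y, w):
    """Pool-adjacent-violators: best non-decreasing fit under weighted MSE."""
    merged = []  # blocks of [mean, weight, count]
    for yi, wi in zip(map(float, y), map(float, w)):
        merged.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, m1 = merged.pop(), merged.pop()
            wsum = m1[1] + m2[1]
            merged.append([(m1[0] * m1[1] + m2[0] * m2[1]) / wsum,
                           wsum, m1[2] + m2[2]])
    out = []
    for mean, _, count in merged:
        out.extend([mean] * count)
    return np.array(out)

def infer_mono_direction(y, w=None):
    """Return +1 or -1: the monotonic direction with lower weighted MSE,
    mirroring pwlfit's automatic direction detection."""
    y = np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    err_up = np.average((y - pava_increasing(y, w)) ** 2, weights=w)
    # A decreasing fit of y is the negated increasing fit of -y.
    err_dn = np.average((y + pava_increasing(-y, w)) ** 2, weights=w)
    return +1 if err_up <= err_dn else -1

print(infer_mono_direction(np.array([0.0, 1.0, 0.9, 2.0])))  # 1
```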

5.1.6  Input Transformations. pwlfit can also interpolate in a transformed space, as a form of feature engineering: pwlfit transforms the 𝑥-values before learning the curve. Specifically, pwlfit will choose a candidate 𝑥-transformation, fx, among log, log1p, or symlog1p based on the range of the 𝑥-values, and then proceed with that transformation only if it increases the Pearson correlation between fx(𝑥) and 𝑦 by a noticeable amount over the identity transformation. Alternatively, the user can specify any strictly increasing 1D transform, or specify the identity transform to disable transformation.
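The transform selection heuristic can be sketched as follows (the exact improvement threshold is not specified in the text, so the margin below is our assumption):

```python
import numpy as np

def choose_fx(x, y, margin=0.01):
    """Pick an x-transformation if it noticeably improves |Pearson r|.

    Candidates depend on the range of x, as in pwlfit: log needs x > 0,
    log1p needs x >= 0, symlog1p handles any sign. The margin threshold
    is an illustrative assumption, not pwlfit's exact rule.
    """
    symlog1p = lambda v: np.sign(v) * np.log1p(np.abs(v))
    candidates = {"symlog1p": symlog1p}
    if np.all(x >= 0):
        candidates["log1p"] = np.log1p
    if np.all(x > 0):
        candidates["log"] = np.log
    corr = lambda u: abs(np.corrcoef(u, y)[0, 1])
    base = corr(x)  # correlation under the identity transform
    name, fn = max(candidates.items(), key=lambda kv: corr(kv[1](x)))
    if corr(fn(x)) > base + margin:
        return name, fn
    return "identity", lambda v: v

x = np.geomspace(1.0, 1e4, 200)
print(choose_fx(x, np.log(x))[0])  # a log-family transform wins here
```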

6      EXPERIMENTS

6.1      Distillation Accuracy

Table 1 and Figure 4a show the results obtained from experiments on the different datasets. A complete set of results can be found in Table 2 in Appendix A. The results of applying our distillation technique with our piecewise-linear approximation algorithm are presented as pwlfit. We present results from using various numbers of segments with and without a monotonicity restriction and otherwise default parameters. In all cases we truncated the control points to four significant digits. We also present several additional reference points to provide context.

•   SGD: We directly learn the curves with the Adadelta [31] optimizer. We initialize the 𝑦-values of the control points as zeros. For the 𝑥-values of the control points we use quantiles for numerical features (e.g. 0%, 50%, 100% for a three-point, two-segment curve) or all unique values for categorical features. We then apply Adadelta to optimize the 𝑦-values. We also attempted to optimize the 𝑥 and 𝑦-values simultaneously, but the results were always worse than optimizing the 𝑦-values alone.

•   NAM: Neural Additive Models (NAMs) [2] are another method for learning interpretable models, proposed by Agarwal et al. We present their results for reference where applicable.

•   Interacting forest: We train a bagged, boosted decision forest allowing feature interactions to demonstrate the accuracy of a non-interpretable, high-complexity "black box" model.

•   GAM forest: We train a bagged boosted decision forest GAM by restricting each tree to use only one feature. This model is also the source model for our distillation technique.

•   pwlf: We apply our distillation technique using an alternative piecewise-linear approximation algorithm, pwlf [13].

On each dataset we used five-fold cross validation and present the metric mean and sample standard deviation across folds. We used three different metrics to evaluate accuracy: AUC-ROC, RMSE, and NDCG@5 for the three different tasks of classification, regression, and ranking, respectively. Further details on our experimentation setup can be found in Appendix A, and further details on the datasets, labels, and metrics can be found in Section 2.1.

Our results show that applying our distillation technique with 4-5 segments with pwlfit produces models which are as accurate as both the source GAM forest and NAM models for all datasets except CWS where a small gap remains. We investigate this accuracy gap in detail in Section 6.2 below. In the case of the COMPAS dataset these models are as accurate as full complexity models. Applying our technique with pwlf produces competitive results, albeit less accurate on the MSLR-WEB30K dataset. By contrast, the results show that learning curves directly via SGD is less general. On the FICO and CWS datasets more segments are required to achieve accuracy comparable to the GAM forest models. On the MSLR-WEB30K dataset the accuracy is inferior even with many more segments.

The consistent accuracy of applying our distillation approach with pwlfit on these four datasets and three separate tasks (classification, regression, learning to rank) demonstrates that the process is not sensitive to either the specific data or the top level objective being used.

6.2      Distillation Failures

In Section 3.1 we explained how distillation yielding a model with inferior accuracy warrants further investigation because the purported "failure" can often be attributed to essential yet non-interpretable characteristics of the teacher model not transferring to the student model. The accuracy gap observed on the CWS dataset is an example of this phenomenon. Figure 5 shows the worst two fits from the CWS dataset. The plots have been redacted to maintain the privacy of the dataset. For each plot it is clear that the original teacher submodel had some non-interpretable behavior which was lost during distillation. This is most evident for feature 𝑥1, the worst offender, where the output is highly erratic. If the original teacher submodel is not distilled for these two features then the accuracy gap between the original teacher model and 5 segment non-monotonic distillation drops from 0.0059 to 0.003 (i.e. ~50% of the gap is recovered).

To identify the above two failures we applied the following method.

•   Begin with the original teacher model. For each submodel, compute the metric delta against the teacher model that results from distilling only that submodel and no others.

•   Perform the above on each cross validation fold using the validation set and average the metric deltas across folds.

•   Sort the features by their associated metric delta to determine the worst distillations.
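As an illustration, the procedure above can be sketched for a toy additive model in which each submodel is a per-feature callable. The function and variable names here are hypothetical and not the paper's implementation:

```python
import numpy as np

def worst_distillations(teacher_fns, distilled_fns, X_val):
    """Rank features by the metric delta (here, RMSE against the teacher's
    output) incurred by distilling only that feature's submodel."""
    teacher_out = sum(f(X_val[:, i]) for i, f in enumerate(teacher_fns))
    deltas = []
    for i, g in enumerate(distilled_fns):
        # Swap in the distilled curve for feature i only; keep the rest.
        out = teacher_out - teacher_fns[i](X_val[:, i]) + g(X_val[:, i])
        rmse = float(np.sqrt(np.mean((out - teacher_out) ** 2)))
        deltas.append((rmse, i))
    return sorted(deltas, reverse=True)  # worst distillations first
```

In the paper's setting, this computation would be repeated per cross-validation fold and the deltas averaged across folds before sorting.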

6.3      Efficiency & Robustness

The experiments of the previous section showed that pwlfit distills the source model more accurately across datasets than pwlf. We also found on the MSLR-WEB30K dataset that pwlfit is more efficient and robust than pwlf. Figure 4b shows per-fit metrics from the first fold of the MSLR-WEB30K dataset as the number of segments varies, without monotonicity. The top plot shows the time in seconds, as measured on a ThinkStation P520, to fit each of the 136 submodels of the source GAM forest. We find that pwlfit is faster in the average case as the number of segments increases, and has a narrower runtime distribution. The bottom plot shows the RMSE of each fit against the 136 submodels of the source GAM forest. We again find that pwlfit performs favorably in the average case, with a narrower error distribution.

It’s worth noting that pwlf by default does not perform any downsampling. For the MSLR-WEB30K dataset, running pwlf without any downsampling was prohibitively expensive. For all of our experiments we ran pwlf with a pre-processing downsample to a random 1000 examples. We found this to be a fair point for balancing speed and quality when comparing to pwlfit. It is of course possible with both algorithms to modify the number of samples used to strike a different trade-off between run time and accuracy.
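A minimal sketch of such a pre-processing downsample follows; the helper name and seed handling are ours, and pwlf itself is unchanged:

```python
import numpy as np

def downsample(x, y, num_samples=1000, seed=0):
    """Randomly subsample (x, y) pairs before fitting.
    pwlf.PiecewiseLinFit(xs, ys).fit(n_segments) would then be run
    on the subset instead of the full data."""
    x, y = np.asarray(x), np.asarray(y)
    if len(x) <= num_samples:
        return x, y
    idx = np.random.default_rng(seed).choice(len(x), num_samples, replace=False)
    return x[idx], y[idx]
```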

6.4      Monotonicity

As discussed in Section 5, pwlfit can fit monotonic curves with automatic direction detection. Figure 4a compares curve models fit with and without monotonicity constraints (automatically inferring the direction) across datasets. For the COMPAS dataset, monotonic and non-monotonic models are comparably accurate, while for FICO, MSLR-WEB30K, and CWS, non-monotonic models are more accurate.

Monotonicity with respect to appropriate features is desirable for interpretable models. In these cases a monotonic model may be preferable to a non-monotonic one, even if it is less accurate. For example, Figure 6 compares monotonic and non-monotonic 5-segment curve models on the FICO dataset for the MSinceMostRecentDelq and PercentTradesWBalance features. Given the semantic meaning of these features, it is desirable from a transparency and incentives standpoint for the model output to be monotonic with respect to each of them.

7      CONCLUSION

We have introduced a novel method for distilling interpretable models into human-readable code using piecewise-linear curves and demonstrated its efficacy on four datasets. We have shown that curve models match or outperform the accuracy achieved by other additive models. On smaller datasets, curve models match the accuracy of more complex models, like interacting decision forests. Our localized distillation methodology is applicable to any model containing univariate numerical functions and is straightforward to implement using the publicly-available pwlfit [22] library.

We have explained how curve model distillation reinforces interpretability and addresses a variety of real-world engineering challenges. Curve models are 1) transparent, 2) well-regularized, 3) easy to analyze for the presence of biases or other fairness issues, and 4) directly editable or improvable outside the context of machine learning to fix the aforementioned fairness issues. Distilling models into human-readable code allows one to address novel machine learning problems using well-established software engineering methods. Curve models can be improved by multiple contributors in parallel, reviewed, and made to systematically follow best practices. Curve models are well-suited for production applications, since they can be natively supported in many languages, are easy to deploy, and fast to evaluate.

ACKNOWLEDGMENTS

We thank Vytenis Sakenas, Jaime Fernandez del Rio, Benoit Zhong, and Petr Mitrichev for their support and providing the algorithms and optimization infrastructure used in our experiments. We also thank Paul Heymann, Diego Federici, Mike Bendersky, Paul Haahr, and Petr Mitrichev for their helpful feedback and detailed reviews. Lastly, we thank Xinyu Qian, Janelle Lee, Po Hu, and Chary Chen for preparing the CWS data set for our experiments.


REFERENCES

[1]  2018. FICO Explainable Machine Learning Challenge. https://community.fico.com/s/explainable-machine-learning-challenge.

[2]  Rishabh Agarwal, Nicholas Frosst, Xuezhou Zhang, Rich Caruana, and Geoffrey E. Hinton. 2020. Neural Additive Models: Interpretable Machine Learning with Neural Nets. arXiv:cs.LG/2004.13912

[3]  Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, and Cynthia Rudin. 2017. Learning Certifiably Optimal Rule Lists. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’17).

[4]  Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica 23 (2016).

[5]  Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model Compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’06). 535–541.

[6]  Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. 2015. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’15). 1721–1730.

[7]  Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia Wang, and Tong Wang. 2018. An Interpretable Model with Globally Consistent Explanations for Credit Risk. arXiv:cs.LG/1811.12615

[8]  Alexandra Chouldechova. 2017. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. Big Data 5, 2 (Jun 2017), 153–163. https://doi.org/10.1089/big.2016.0047

[9]  Julia Dressel and Hany Farid. 2018. The accuracy, fairness, and limits of predicting recidivism. Science Advances 4, 1 (2018). https://doi.org/10.1126/sciadv.aao5580

[10]  Mengnan Du, Ninghao Liu, and Xia Hu. 2019. Techniques for interpretable machine learning. Commun. ACM 63, 1 (Dec 2019), 68–77. https://doi.org/10.1145/3359786

[11]  Trevor Hastie and Robert Tibshirani. 1986. Generalized Additive Models. Statist. Sci. 1, 3 (08 1986), 297–310.

[12]  Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the Knowledge in a Neural Network. arXiv:stat.ML/1503.02531

[13]  Charles F. Jekel and Gerhard Venter. 2019. pwlf: A Python Library for Fitting 1D Continuous Piecewise Linear Functions. https://github.com/cjekel/piecewise_linear_fit_py

[14]  Jon Kleinberg. 2018. Inherent Trade-Offs in Algorithmic Fairness. In Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS ’18).

[15]  Thomas Kluyver, Benjamin Ragan-Kelley, Fernando Pérez, Brian Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica Hamrick, Jason Grout, Sylvain Corlay, Paul Ivanov, Damián Avila, Safia Abdalla, and Carol Willing. 2016. Jupyter Notebooks – a publishing format for reproducible computational workflows. In Positioning and Power in Academic Publishing: Players, Agents and Agendas, F. Loizides and B. Schmidt (Eds.). IOS Press, 87 – 90.

[16]  A. K. Kolmogorov. 1957. On the Representation of Continuous Functions of Several Variables by Superposition of Continuous Functions of One Variable and Addition. Doklady Akademii Nauk SSSR 114 (1957), 369–373.

[17]  Yin Lou, Rich Caruana, and Johannes Gehrke. 2012. Intelligible Models for Classification and Regression. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’12). 150–158.

[18]  Yin Lou, Rich Caruana, Johannes Gehrke, and Giles Hooker. 2013. Accurate Intelligible Models with Pairwise Interactions. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’13). 623–631.

[19]  Tao Qin and Tie-Yan Liu. 2013. Introducing LETOR 4.0 Datasets. CoRR abs/1306.2597 (2013). http://arxiv.org/abs/1306.2597

[20]  Tao Qin, Tie-Yan Liu, and Hang Li. 2010. A general approximation framework for direct optimization of information retrieval measures. Information retrieval 13, 4 (2010), 375–397.

[21]  Cynthia Rudin. 2018. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. arXiv:stat.ML/1811.10154

[22]  Ethan Sterling and Walker Ravina. 2019. pwlfit: A Piecewise-Linear Curve Fitting Library. https://github.com/google/pwlfit

[23]  Sarah Tan, Rich Caruana, Giles Hooker, and Yin Lou. 2018. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18). 303–310.

[24]  Berk Ustun and Cynthia Rudin. 2014. Methods and Models for Interpretable Linear Classification. arXiv:stat.ME/1405.4047

[25]  Wikipedia. 2020. Bhatia–Davis inequality. http://en.wikipedia.org/w/index.php?title=Bhatia%E2%80%93Davis%20inequality&oldid=875899600. [Online; accessed 03-September-2020].

[26]  Wikipedia. 2020. Isotonic regression. http://en.wikipedia.org/w/index.php?title=Isotonic%20regression&oldid=989717822. [Online; accessed 30-November-2020].

[27]  Wikipedia. 2020. Kolmogorov–Arnold representation theorem. http://en.wikipedia.org/w/index.php?title=Kolmogorov%E2%80%93Arnold%20representation%20theorem&oldid=964097101. [Online; accessed 10-August-2020].

[28]  Wikipedia. 2020. Minimum description length. http://en.wikipedia.org/w/index.php?title=Minimum%20description%20length&oldid=965620302. [Online; accessed 12-August-2020].

[29]  Wikipedia. 2020. Nothing-up-my-sleeve number. http://en.wikipedia.org/w/index.php?title=Nothing-up-my-sleeve%20number&oldid=972510276. [Online; accessed 12-August-2020].

[30]  Wikipedia. 2020. Segmented regression. http://en.wikipedia.org/w/index.php?title=Segmented%20regression&oldid=910888930. [Online; accessed 10-August-2020].

[31]  Matthew D. Zeiler. 2012. ADADELTA: An Adaptive Learning Rate Method. CoRR abs/1212.5701 (2012). arXiv:1212.5701 http://arxiv.org/abs/1212.5701

[32]  Honglei Zhuang, Xuanhui Wang, Michael Bendersky, Alexander Grushetsky, Yonghui Wu, Petr Mitrichev, Ethan Sterling, Nathan Bell, Walker Ravina, and Hai Qian. 2021. Interpretable Ranking with Generalized Additive Models. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (WSDM ’21). to appear.


A       EXPERIMENTAL DETAILS

A.1        Cross Validation

We performed 5-fold cross validation on all datasets.

•   COMPAS & FICO: The datasets were split into 5 equal parts. Each part was used once as a test set (20%) with the remaining parts as the training set (80%). We used the same random folds as in the NAM paper [2]. No validation set was used given the small size of the data. Instead we used out of bag evaluation wherever a validation set would be used (see below).

•   MSLR-WEB30K: We used the predefined folds and partitions from the original dataset. For each fold it allocates 60% for training, 20% for validation, and 20% for testing.

•   CWS: We used a dataset of 60,000 queries and 2,690,439 items with an average of ~44 items per query. The dataset was split into 5 equal parts. Each part was used once as a test set. Of the remaining parts 80% was used as training and 20% as validation. Overall this resulted in 64% for training, 16% for validation and 20% for test for each fold.

A.2        Ensemble Learning

For both SGD and tree models, we trained ensembles with 56 bags using a bag fraction of 7 to produce the random subsets. For MSLR-WEB30K and CWS, queries were randomly divided into bags. For the other datasets, individual examples were randomly divided into bags. When applying our distillation technique, we distilled the ensembles into a single PWLCurve per feature. When learning the curves directly via SGD, we averaged the learned 𝑦-coordinate values across bags to obtain the final model.

A.3        Loss Functions

We trained SGD and tree models using log-loss for the COMPAS dataset, mean squared error (MSE) for the FICO dataset, and ranking loss (ApproxNDCG [20]) for the MSLR-WEB30K and CWS datasets.

A.4        Hyper-parameters

For the COMPAS and FICO datasets, hyper-parameters were tuned using out of bag evaluation on the training set of the first fold. For MSLR-WEB30K and CWS, we used the validation sets of the first fold.

•   SGD: We tuned the batch size in {128, 256, 512, 1024, 4096}. We used the Adadelta [31] optimizer and tuned a sufficient maximum number of steps for convergence. No other parameters were tuned.

•   Interacting forest: We trained depth 5 trees using an internal boosted forest algorithm. We tuned a sufficient maximum number of steps for convergence. No other parameters were tuned.

•   GAM forest: We trained depth 3 trees restricted to using a single feature with an internal boosted forest algorithm. We tuned a sufficient maximum number of steps for convergence. No other parameters were tuned.

In all cases, we trained models for the tuned maximum number of steps and then truncated models after training. Truncation used a confidence-based truncation algorithm which attempts to select the earliest step for which no later step provides a confident win. This algorithm was run on the validation set if present or otherwise utilized out of bag evaluation.
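One plausible instantiation of this truncation rule is sketched below; the z-score threshold, the function shape, and the "higher is better" metric convention are our assumptions, not the internal algorithm:

```python
import numpy as np

def truncate_step(means, stderrs, z=1.96):
    """Pick the earliest training step such that no later step is a
    confident improvement. `means`/`stderrs` hold the per-step
    validation metric and its standard error (higher is better)."""
    means, stderrs = np.asarray(means), np.asarray(stderrs)
    for s in range(len(means)):
        later = np.arange(s + 1, len(means))
        # A later step "confidently wins" over step s if its lower
        # confidence bound exceeds step s's mean.
        if not np.any(means[later] - z * stderrs[later] > means[s]):
            return s
    return len(means) - 1
```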

A.5        Code

The GitHub repository for pwlfit [22] contains several Jupyter notebooks applying our distillation technique and performing the analyses shown in this paper. Please reference the v0.2.0 release to get the accompanying data files and appropriate version of the Jupyter notebooks.

B        LINEAR CONDENSE

Linear condensing is a data optimization designed to reduce the runtime complexity of our piecewise-linear curve fitting.

B.1        Motivation/Overview

pwlfit picks a set of candidate 𝑥-knots and searches through combinations of those 𝑥-knots. For each combination considered, it solves a linear least squares expression for the ideal 𝑦-knots, calculates the resulting squared error, and prefers the combination that yields the lowest error.

Each solve is linear in the size of the input, which is slow for large data. We could downsample to save compute at the cost of accuracy. Instead, we introduce a technique to save compute at no cost in accuracy. We condense the data into O(#𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝑠) synthetic points. These synthetic points perfectly recreate the true squared error over the full data for every PWLCurve that will be considered. We then optimize over the synthetic points instead of the real points.
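The inner least-squares solve described above can be sketched as follows: with the 𝑥-knots fixed, the curve is linear in its 𝑦-knots via a hat-function basis, so the optimal 𝑦-knots come from a single weighted least-squares solve. This is our own sketch under that framing, not pwlfit's actual code:

```python
import numpy as np

def fit_y_knots(x, y, w, x_knots):
    """Given fixed, sorted x-knots, solve weighted least squares for the
    y-knots of the piecewise-linear curve."""
    x_knots = np.asarray(x_knots, dtype=float)
    # Hat-function basis: curve(x) = sum_j y_j * phi_j(x), where phi_j is
    # 1 at knot j, 0 at the other knots, and linear in between. np.interp
    # clamps outside the knot range, matching constant extrapolation.
    A = np.zeros((len(x), len(x_knots)))
    for j in range(len(x_knots)):
        e = np.zeros(len(x_knots))
        e[j] = 1.0
        A[:, j] = np.interp(x, x_knots, e)
    sw = np.sqrt(np.asarray(w, dtype=float))
    y_knots, *_ = np.linalg.lstsq(A * sw[:, None], np.asarray(y) * sw, rcond=None)
    return y_knots
```

The squared error of this solution is what the knot-combination search compares across candidate combinations.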

This is possible because we know the candidate 𝑥-knots ahead of time. A PWLCurve defined on those 𝑥-knots will always be linear between any adjacent 𝑥-knots in the set of candidates. As we show in the theorem, we can condense arbitrarily many points down to two points such that linear fits are the same on those two points as on the full set. In the corollary, we apply this process separately between each pair of candidate 𝑥-knots, producing two points between each pair. Together, the squared error of such a PWLCurve is the same on those synthetic points as it is on the full data set. (Up to a constant that we safely ignore because it’s the same for each PWLCurve.)

B.2        Definitions

For convenience, we take standard definitions and specialize them for weighted 2D points.

Definition B.1. Let a ‘point’ refer to a real-valued triple of the form (𝑥, 𝑦, 𝑤𝑒𝑖𝑔ℎ𝑡) with 𝑤𝑒𝑖𝑔ℎ𝑡 > 0.

Definition B.2. Let a ‘line’ refer to a function of the form 𝑓 (𝑥) =𝑚𝑥 + 𝑏 for 𝑚, 𝑏, 𝑥 ∈ R.

Definition B.3. For any function 𝑓 : R → R and finite point set 𝑃, define the squared error 𝑆𝐸( 𝑓 , 𝑃) as the sum of ( 𝑓 (𝑥) −𝑦)2 · 𝑤𝑒𝑖𝑔ℎ𝑡 for each point in 𝑃. If 𝑃 is empty, we consider the squared error to be 0.

Definition B.4. For any finite point set 𝑃, define the ‘best fit line’ 𝑏𝑒𝑠𝑡 𝑓 𝑖𝑡𝑙𝑖𝑛𝑒 (𝑃) as the line 𝐿 that minimizes 𝑆𝐸(𝐿, 𝑃). In the degenerate case where multiple lines minimize 𝑆𝐸, let the best fit line be the solution with zero slope, and if multiple solutions have zero slope, let the best fit line be the solution with a zero 𝑦-intercept.

There are two degenerate cases that require tie-breaking. If the point set is empty, every line has the same squared error, so our definition chooses 𝑓 (𝑥) = 0 as the best fit line. If the point set is nonempty but all its points have the same 𝑥, then any line with the correct value at 𝑥 will minimize the squared error, so our definition chooses the horizontal line.

B.3        Theorem

Theorem B.5. Given a set of points 𝑃, we can construct a set 𝑃′ of two or fewer points such that (1) the 𝑥-values of 𝑃′ lie within [𝑚𝑖𝑛(𝑋), 𝑚𝑎𝑥(𝑋)], the range of the 𝑥-values of 𝑃, and (2) for every line 𝐿, 𝑆𝐸(𝐿, 𝑃′) = 𝑆𝐸(𝐿, 𝑃) − 𝑆𝐸(𝑏𝑒𝑠𝑡𝑓𝑖𝑡𝑙𝑖𝑛𝑒(𝑃), 𝑃).

Remark. These properties are desirable because (2) allows us to compute the squared error of 𝑀 lines over a data set of 𝑁 points in O(𝑁 + 𝑀) instead of the naive O(𝑁 𝑀), and (1) allows us to extend this property from lines to a useful class of piecewise-linear curves in the corollary.

Note that the points in 𝑃 ′ are constructed, rather than chosen from 𝑃. The construction of 𝑃 ′ is implemented in pwlfit [22] as

linear_condense.linear_condense.

Proof. Let 𝑋, 𝑌 , and 𝑊 represent the 𝑥, 𝑦, and 𝑤𝑒𝑖𝑔ℎ𝑡 values of 𝑃, respectively. We dismiss the trivial case where 𝑃 is empty; in that case, an empty 𝑃 ′ satisfies the requirements. Likewise, we dismiss the case where 𝑚𝑖𝑛(𝑋 ) = 𝑚𝑎𝑥 (𝑋 ) since 𝑃 ′ = {𝐶𝑒𝑛𝑡𝑟𝑜𝑖𝑑 (𝑃)} fulfills our desired properties. With those cases resolved, we assume for the rest of this proof that 𝑚𝑖𝑛(𝑋 ) < 𝑚𝑎𝑥 (𝑋).

B.3.1      Reframe the Coordinate System. To begin, we reframe the coordinate system such that the origin is the centroid of 𝑃 and 𝑦 = 0 is the best fit line. (This simplifies the math.) We ensure that the shift of coordinates is reversible and preserves the squared error.

𝐶𝑒𝑛𝑡𝑟𝑜𝑖𝑑 (𝑃) = (𝑋 · 𝑊 /𝑠𝑢𝑚(𝑊 ), 𝑌 · 𝑊 /𝑠𝑢𝑚(𝑊 )). We translate the coordinate frame by this centroid so that, under the new coordinates, 𝐶𝑒𝑛𝑡𝑟𝑜𝑖𝑑 (𝑃) = (0, 0). After translation, 𝑋 · 𝑊 = 0 and 𝑌 ·𝑊 = 0.

Additionally, we skew the coordinate system by the slope of the best fit line: we replace 𝑌 with 𝑌 − 𝑋 · 𝑠𝑙𝑜𝑝𝑒(𝑏𝑒𝑠𝑡𝑓𝑖𝑡𝑙𝑖𝑛𝑒(𝑃)). With the centroid at the origin, the slope of the best fit line is 𝐶𝑜𝑣𝑎𝑟𝑖𝑎𝑛𝑐𝑒(𝑋, 𝑌, 𝑊)/𝑉𝑎𝑟𝑖𝑎𝑛𝑐𝑒(𝑋, 𝑊) = (𝑋𝑌 ·𝑊)/(𝑋² ·𝑊). After this skew, 𝑋𝑌 ·𝑊 = 0.

Under the new coordinate frame, 𝑆𝐸(𝑏𝑒𝑠𝑡𝑓𝑖𝑡𝑙𝑖𝑛𝑒(𝑃), 𝑃) = 𝑆𝐸(𝑦 = 0, 𝑃) = 𝑌² ·𝑊.

We will determine 𝑃 ′ under this new coordinate system. Afterwards, we can easily convert 𝑃 ′ back to the original coordinate system by reversing the skew and the translation.

B.3.2 Squared Error of an arbitrary line. We will express 𝑆𝐸(𝑙𝑖𝑛𝑒, 𝑃) as 𝑆𝐸(𝑏𝑒𝑠𝑡 𝑓 𝑖𝑡𝑙𝑖𝑛𝑒 (𝑃), 𝑃) plus leftover terms. From that, we will derive a 𝑃 ′ such that 𝑆𝐸(𝑙𝑖𝑛𝑒, 𝑃 ′ ) equals those leftover terms.

For an arbitrary line 𝑦 = 𝑚𝑥 + 𝑏,

𝑆𝐸(𝑦 = 𝑚𝑥 + 𝑏, 𝑃) = (𝑚𝑋 + 𝑏 − 𝑌)² ·𝑊 = (𝑚²𝑋² + 2𝑏𝑚𝑋 − 2𝑚𝑋𝑌 + 𝑏² − 2𝑏𝑌 + 𝑌²) ·𝑊.

In our coordinate frame, 𝑋 ·𝑊 = 0, 𝑌 ·𝑊 = 0, and 𝑋𝑌 ·𝑊 = 0. So

𝑆𝐸(𝑦 = 𝑚𝑥 + 𝑏, 𝑃) = (𝑚²𝑋² + 𝑏² + 𝑌²) ·𝑊.

Since 𝑌² ·𝑊 = 𝑆𝐸(𝑏𝑒𝑠𝑡𝑓𝑖𝑡𝑙𝑖𝑛𝑒(𝑃), 𝑃), it follows that

𝑆𝐸(𝑦 = 𝑚𝑥 + 𝑏, 𝑃) = 𝑚²𝑋² ·𝑊 + 𝑏² ·𝑊 + 𝑆𝐸(𝑏𝑒𝑠𝑡𝑓𝑖𝑡𝑙𝑖𝑛𝑒(𝑃), 𝑃),

and equivalently,

𝑚²𝑋² ·𝑊 + 𝑏² ·𝑊 = 𝑆𝐸(𝑦 = 𝑚𝑥 + 𝑏, 𝑃) − 𝑆𝐸(𝑏𝑒𝑠𝑡𝑓𝑖𝑡𝑙𝑖𝑛𝑒(𝑃), 𝑃).

B.3.3 Squared error over 𝑃′. We require

𝑆𝐸(𝑦 = 𝑚𝑥 + 𝑏, 𝑃′) = 𝑆𝐸(𝑦 = 𝑚𝑥 + 𝑏, 𝑃) − 𝑆𝐸(𝑏𝑒𝑠𝑡𝑓𝑖𝑡𝑙𝑖𝑛𝑒(𝑃), 𝑃) for all lines 𝑦 = 𝑚𝑥 + 𝑏

⇐⇒ (𝑚𝑋′ + 𝑏 − 𝑌′)² ·𝑊′ = 𝑚²𝑋² ·𝑊 + 𝑏² ·𝑊 for all lines 𝑦 = 𝑚𝑥 + 𝑏.

The above equation can be viewed as a quadratic polynomial in the two variables 𝑚 and 𝑏. To hold for all values of 𝑚 and 𝑏, the coefficients of each 𝑚^𝑐𝑏^𝑑 must be equal on both sides of the equation. Then the equation holds iff:

1. 𝑋′² ·𝑊′ = 𝑋² ·𝑊, and
2. 𝑋′ ·𝑊′ = 0, and
3. 𝑠𝑢𝑚(𝑊) = 𝑠𝑢𝑚(𝑊′), and
4. 𝑌′ ·𝑊′ = 0, and
5. 𝑌′² ·𝑊′ = 0, and
6. 𝑋′𝑌′ ·𝑊′ = 0.

(5) ⇐⇒ 𝑌′ = 0, which also guarantees (4) and (6). We will use (1)–(3) to derive a satisfactory 𝑋′ and 𝑊′.

B.3.4 Deriving 𝑋 ′ and 𝑊 ′ . We’ve determined that 𝑌 ′ = 0.

Let 𝑋′ := (𝑥1, 𝑥2) and 𝑊′ := (𝑤1, 𝑤2). Without loss of generality, let 𝑥1 <= 𝑥2. Then, to satisfy our squared error expression, it’s necessary and sufficient that:

1. 𝑥1²𝑤1 + 𝑥2²𝑤2 = 𝑋² ·𝑊, and
2. 𝑥1𝑤1 + 𝑥2𝑤2 = 0, and
3. 𝑤1 + 𝑤2 = 𝑠𝑢𝑚(𝑊).

Because we have three equations in four unknowns, we cannot directly solve for 𝑥1, 𝑥2, 𝑤1, 𝑤2. To produce a fourth equation, we choose the constraint that 𝑥1/𝑥2 = 𝑚𝑖𝑛(𝑋)/𝑚𝑎𝑥(𝑋). This choice will simplify the math, and will ensure that 𝑚𝑖𝑛(𝑋) <= 𝑥1 <= 𝑥2 <= 𝑚𝑎𝑥(𝑋).

With this fourth equation, we solve the simultaneous equations to produce:

𝑥1 = −𝜎 · √(−𝑚𝑖𝑛(𝑋)/𝑚𝑎𝑥(𝑋)), 𝑥2 = 𝜎 · √(−𝑚𝑎𝑥(𝑋)/𝑚𝑖𝑛(𝑋)),
𝑤1 = 𝑠𝑢𝑚(𝑊) · 𝑥2/(𝑥2 − 𝑥1), 𝑤2 = −𝑠𝑢𝑚(𝑊) · 𝑥1/(𝑥2 − 𝑥1),

where 𝜎 := 𝑠𝑡𝑑𝑑𝑒𝑣(𝑋, 𝑊) = √(𝑋² ·𝑊/𝑠𝑢𝑚(𝑊)).

Note that, because the centroid is zero, 𝑚𝑖𝑛(𝑋) < 0 < 𝑚𝑎𝑥(𝑋), so these expressions are all defined. (The denominators are never 0 and the values beneath the square roots are never negative.)

𝑃′ = {(𝑥1, 0, 𝑤1), (𝑥2, 0, 𝑤2)} satisfies our requirements.

B.3.5 Verify that 𝑚𝑖𝑛(𝑋) <= 𝑥1 <= 𝑥2 <= 𝑚𝑎𝑥(𝑋). We wanted 𝑃′ to satisfy the squared error expression, which it does, and also to have its 𝑥-values bounded by the 𝑥-values of 𝑃, which we prove now. Let 𝜇 := 𝐸(𝑋, 𝑊), the expected value of 𝑋 weighted by 𝑊, which is equivalent to the 𝑥-value of 𝐶𝑒𝑛𝑡𝑟𝑜𝑖𝑑(𝑃). By the Bhatia–Davis inequality [25], 𝑠𝑡𝑑𝑑𝑒𝑣(𝑋, 𝑊)² <= (𝜇 − 𝑚𝑖𝑛(𝑋))(𝑚𝑎𝑥(𝑋) − 𝜇). (This inequality is equivalent to the observation that the standard deviation of a distribution is maximized when all the 𝑥s are at the extremes – i.e. equal to 𝑚𝑖𝑛(𝑋) or 𝑚𝑎𝑥(𝑋).)

Since 𝜇 is zero for 𝑃, 𝑠𝑡𝑑𝑑𝑒𝑣(𝑋, 𝑊)² <= −𝑚𝑖𝑛(𝑋)𝑚𝑎𝑥(𝑋). Substituting this bound into the expressions for 𝑥1 and 𝑥2 gives |𝑥1| <= −𝑚𝑖𝑛(𝑋) and |𝑥2| <= 𝑚𝑎𝑥(𝑋), i.e. 𝑚𝑖𝑛(𝑋) <= 𝑥1 and 𝑥2 <= 𝑚𝑎𝑥(𝑋), as desired. □
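The construction in the proof can be written out directly. The following sketch is our own code, mirroring the role of pwlfit's linear_condense but not copied from it; it condenses weighted points into two points whose squared error against any line differs from the full data's by a line-independent constant:

```python
import numpy as np

def linear_condense(x, y, w):
    """Condense weighted 2D points into two points, following the proof:
    translate the centroid to the origin, skew so the best-fit line is
    y = 0, place two points on that line, then map them back."""
    x, y, w = (np.asarray(a, dtype=float) for a in (x, y, w))
    sw = w.sum()
    cx, cy = (x * w).sum() / sw, (y * w).sum() / sw
    X = x - cx                                   # centroid at the origin
    slope = (w * X * (y - cy)).sum() / (w * X * X).sum()
    # sigma is the weighted stddev of X; the proof assumes min(X) < max(X).
    sigma = np.sqrt((w * X * X).sum() / sw)
    lo, hi = X.min(), X.max()                    # lo < 0 < hi after centering
    x1 = -sigma * np.sqrt(-lo / hi)
    x2 = sigma * np.sqrt(-hi / lo)
    w1 = sw * x2 / (x2 - x1)
    w2 = -sw * x1 / (x2 - x1)
    xs = np.array([x1, x2])
    # Undo the skew and the translation to return to original coordinates.
    return xs + cx, slope * xs + cy, np.array([w1, w2])

def squared_error(m, b, x, y, w):
    """SE of the line y = m*x + b over weighted points, per Def. B.3."""
    return float((np.asarray(w) * (m * np.asarray(x) + b - np.asarray(y)) ** 2).sum())
```

Because squared-error differences between lines are preserved exactly, optimizing a line over the two condensed points is equivalent to optimizing it over the full data.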

B.4        Corollary

Corollary B.6. Given a set of points 𝑃 and a set of x-knots 𝐾, we can construct a set of points 𝑃 ′ with |𝑃 ′ | <= 2(|𝐾| − 1) such that, for any PWLCurve 𝐶 whose x-knots are elements of 𝐾, 𝑆𝐸(𝐶, 𝑃) = 𝑆𝐸(𝐶, 𝑃 ′ ) + 𝑐, where 𝑐 is a constant determined exclusively by 𝑃 and 𝐾 that’s the same for every 𝐶.

Note that the points in 𝑃 ′ are constructed, rather than chosen from 𝑃. The construction of 𝑃 ′ is implemented in pwlfit [22] as linear_condense.condense_around_knots.

B.4.1 Preprocess 𝑃 by clamping. Let 𝑘 := |𝐾|, and consider 𝐾 in sorted order. Piecewise-linear curves are constant for input values that exceed the range of their x-knots, so 𝐶 is constant for 𝑥 <= 𝑚𝑖𝑛(𝐾) = 𝐾[0] and for 𝑥 >= 𝑚𝑎𝑥 (𝐾) = 𝐾[𝑘 − 1].

Therefore we can clamp the x-values of 𝑃 to [𝐾[0], 𝐾[𝑘 − 1]] without altering 𝑆𝐸(𝐶, 𝑃). We do so as a preprocess.

B.4.2 Partition 𝑃 by 𝐾. To construct 𝑃 ′ from 𝑃, we first partition 𝑃 by 𝐾 into 𝑘 + 1 disjoint pieces labeled 𝑃0, 𝑃1, ..., 𝑃𝑘 .

Because we clamped 𝑃, 𝑃0 and 𝑃𝑘 are empty. Therefore ⋃_{𝑖=1}^{𝑘−1} 𝑃𝑖 = 𝑃.

A PWLCurve is linear between consecutive control points, so 𝐶 is linear over each 𝑃𝑖 .

B.4.3 Convert each partition into two points. From the theorem, for each 𝑃𝑖, we can produce a two-point set 𝑃′𝑖 with 𝑚𝑖𝑛𝑥(𝑃𝑖) <= 𝑚𝑖𝑛𝑥(𝑃′𝑖) <= 𝑚𝑎𝑥𝑥(𝑃′𝑖) <= 𝑚𝑎𝑥𝑥(𝑃𝑖), such that for any line 𝐿, 𝑆𝐸(𝐿, 𝑃′𝑖) = 𝑆𝐸(𝐿, 𝑃𝑖) − 𝑆𝐸(𝑏𝑒𝑠𝑡𝑓𝑖𝑡𝑙𝑖𝑛𝑒(𝑃𝑖), 𝑃𝑖). Since 𝐶 is linear over each 𝑃𝑖, summing over 𝑖 gives 𝑆𝐸(𝐶, 𝑃′) = 𝑆𝐸(𝐶, 𝑃) − 𝑐, with 𝑃′ := ⋃𝑖 𝑃′𝑖 and 𝑐 := Σ𝑖 𝑆𝐸(𝑏𝑒𝑠𝑡𝑓𝑖𝑡𝑙𝑖𝑛𝑒(𝑃𝑖), 𝑃𝑖), which depends only on 𝑃 and 𝐾. □

This paper is available on arXiv under the CC BY 4.0 (Attribution 4.0 International) license.