Aaron Turon

Specialization, coherence, and API evolution

Specialization has been available in nightly Rust for over a year, and we’ve recently been thinking about the steps needed to stabilize it.

There are a couple of implementation issues that are currently blocked on an overhaul of the trait system which should be coming in the next couple of months.

What I want to talk about, though, is some deeper design questions that need to be resolved prior to stabilization, ones involving potential changes to the core specialization rules. This is a story that begins with a bold hope, runs headlong into a tragic discovery, and ultimately ends up close to where it started.

A New Hope

A rite of passage for learning Rust’s trait system is first encountering a coherence error. Coherence is a vital but frustrating property; it guarantees that there is always a single, unambiguous impl of a trait that is used for any given type.

It’s not too hard to see why coherence is vital. Imagine working with a HashMap in which multiple implementations of Hash applied to the key type. If different pieces of code ended up using different impls, map operations would return totally bogus results, and it would be very difficult to track down why.
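To make that failure mode concrete, here is a self-contained toy sketch (an invented eight-bucket map, not the real HashMap, with the hash function passed explicitly to stand in for two competing impls of Hash) of what goes wrong when insertion and lookup disagree:

```rust
// Toy eight-bucket "hash map" where the hash function is passed in
// explicitly, standing in for two competing impls of Hash.
struct ToyMap {
    buckets: Vec<Vec<(String, i32)>>,
}

impl ToyMap {
    fn new() -> ToyMap {
        ToyMap { buckets: vec![Vec::new(); 8] }
    }

    fn insert(&mut self, hash: fn(&str) -> usize, key: &str, val: i32) {
        self.buckets[hash(key)].push((key.to_string(), val));
    }

    fn get(&self, hash: fn(&str) -> usize, key: &str) -> Option<i32> {
        self.buckets[hash(key)]
            .iter()
            .find(|(k, _)| k == key)
            .map(|(_, v)| *v)
    }
}

// Two made-up "impls" of hashing for string keys:
fn hash_by_bytes(key: &str) -> usize {
    key.bytes().map(|b| b as usize).sum::<usize>() % 8
}

fn hash_by_len(key: &str) -> usize {
    key.len() % 8
}

fn main() {
    let mut map = ToyMap::new();
    map.insert(hash_by_bytes, "hello", 1);

    // Same impl on both sides: the entry is found.
    assert_eq!(map.get(hash_by_bytes, "hello"), Some(1));

    // A different impl at lookup time computes a different bucket,
    // so the entry silently vanishes.
    assert_eq!(map.get(hash_by_len, "hello"), None);
}
```

With one coherent impl, both sides compute the same bucket; the moment two impls can disagree, entries go missing with no error anywhere.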

What makes coherence frustrating is that, to enforce it, we must limit the kinds of impls you can write in different crates. We do this through a pair of rules:

  • The orphan rule, which very roughly says that you can write an impl only if either your crate defined the trait or defined one of the types the impl is for.

  • The overlap rule, which says that a given trait cannot have two impls that both apply to a single type (which would introduce ambiguity about which impl to use), unless one is a specialization of the other.

These rules work closely together; in particular, the orphan rule ensures that sibling crates can’t accidentally define overlapping impls for a parent trait, since their impls must each involve crate-local types.
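A minimal single-crate sketch of that situation (the trait and sibling types are invented for illustration; imagine each impl living in its own crate):

```rust
// Stand-in for the parent crate: it defines the trait.
trait Greet {
    fn greet(&self) -> &'static str;
}

// Each "sibling" crate defines its own local type...
struct SiblingA;
struct SiblingB;

// ...and implements the parent trait only for that local type.
impl Greet for SiblingA {
    fn greet(&self) -> &'static str { "A" }
}
impl Greet for SiblingB {
    fn greet(&self) -> &'static str { "B" }
}

// Resolution stays unambiguous. If both siblings could instead write
// an impl for a shared type like String, the compiler could not pick
// one; that is exactly the overlap the orphan rule prevents.
fn main() {
    assert_eq!(SiblingA.greet(), "A");
    assert_eq!(SiblingB.greet(), "B");
}
```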

Prior to Rust 1.0, we iterated quite a bit on these rules, and arrived at the current design in the Rebalancing coherence RFC. That RFC was predicated on a core assumption:

The problem is that due to coherence, the ability to define impls is a zero-sum game: every impl that is legal to add in a child crate is also an impl that a parent crate cannot add without fear of breaking downstream crates.

However, with specialization, it seems this assumption may no longer hold! The point of specialization is to allow for impls to overlap, and then to select the “most specific” impl. That means, in particular, that it’s feasible for a parent and child crate to safely define overlapping impls. Niko wrote a blog post proposing to leverage specialization in just this way.

The fundamental attribute

The existing orphan rule is based on an idea of “fundamental” types. It’s easiest to understand how it works through an example:

//// Parent crate ////
trait ParentTrait {
    fn foo(&self);
}

//// Child crate ////
struct ChildType { .. }
impl ParentTrait for Box<ChildType> { .. }

This example is permitted today, and works fine as written. However, it means that adding the following blanket impl to the parent crate is a breaking change (assuming that the new impl is not specializable):

// Add to parent crate
impl<T> ParentTrait for Box<T> {
    // note: not specializable, since we didn't write `default`
    fn foo(&self) { .. }
}

This change would introduce an overlap with the child crate’s impl, preventing the child crate from compiling.

To avoid all such new parent crate impls being breaking changes, the Rebalancing coherence RFC introduced a restriction on child crates: roughly speaking, their impls of parent traits must either directly reference a type defined in the child crate, or reference it within a “fundamental” type constructor (&, &mut, Box). In other words:

// These child crate impls are allowed:
impl ParentTrait for ChildType { .. }
impl ParentTrait for Box<ChildType> { .. }
impl<'a> ParentTrait for &'a ChildType { .. }

// ... but these impls are NOT allowed:
impl ParentTrait for Vec<ChildType> { .. }
impl ParentTrait for (ChildType, ChildType) { .. }

The idea is to strike a balance between the impls that child crates can have, and the ones that parent crates can add over time. If a parent crate adds a blanket impl involving a fundamental type constructor, that’s a breaking change (since it could overlap with a child crate). But if it adds one for e.g. Vec<T>, that’s fine, because no child crate could have an impl involving that type.
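A compiling sketch of that balance (all in one crate for brevity; the names are invented): the blanket Vec impl cannot overlap with any child-style impl, because child impls are never allowed to target Vec directly.

```rust
trait ParentTrait {
    fn foo(&self) -> &'static str;
}

struct ChildType;

// Child-style impl: directly mentions the (locally defined)
// ChildType, as the orphan rule requires.
impl ParentTrait for ChildType {
    fn foo(&self) -> &'static str { "child" }
}

// Parent-style blanket impl over Vec<T>: safe to add later, because
// no child crate may write an impl that targets Vec directly, so no
// overlap can ever arise.
impl<T> ParentTrait for Vec<T> {
    fn foo(&self) -> &'static str { "vec" }
}

fn main() {
    assert_eq!(ChildType.foo(), "child");
    assert_eq!(vec![1, 2, 3].foo(), "vec");
}
```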

This “fundamental” restriction is an arbitrary line drawn as part of the “zero-sum game” of writing non-overlapping impls. It’s applied using an unstable attribute, #[fundamental], which is difficult to understand and has had no clear path toward stabilization.

A positive-sum game?

But wait. When we introduced specialization, we relaxed the overlap rule to allow for overlap, as long as one impl specialized the other. Since the orphan rule is ultimately about preventing overlap from arising between multiple crates, maybe there’s a way to leverage specialization there as well?

If we revisit the above example, but make the new parent crate impl specializable (by using default), it no longer breaks the child crate. In other words, the following impls can all safely coexist:

//// Parent crate ////
trait ParentTrait {
    fn foo(&self);
}

impl<T> ParentTrait for Box<T> {
    default fn foo(&self) { .. }
}

//// Child crate ////
struct ChildType { .. }

// Now specializes the blanket impl from the parent crate
impl ParentTrait for Box<ChildType> { .. }

Is this always the case? In other words, if you add a new impl in the parent crate and mark it specializable, are you guaranteed not to break any child crates? That would allow us to get rid of fundamental, allowing both parent and client crates to add more kinds of impls than they can today, without breakage!

To make this idea work, you’d need to expand the specialization rules a bit, as Niko explains in his post. But those details won’t be too relevant to the core point of this post.

The Trait System Strikes Back

Halfway through writing up an RFC with the above ideas, while trying to prove to myself that they worked, I started to get worried. And then I started trying to prove that they didn’t work. And it turns out they don’t.

The crux of the problem is that the trait system is just too powerful; as we’ve learned over and over again, the trait system can be used to draw sneaky connections that are hard to defend against. Let me show you what I mean.

Imagine we have three crates, arranged in the following way:

//// Crate A ////

trait A {}

// Line we want to add:
// impl<T> A for T {}


//// Crate B ////

trait B {
    type Out;
}

impl<T> B for T where T: A {
    // Note: not specializable
    type Out = ();
}


//// Crate C ////

struct C;

impl B for C {
    type Out = bool;
}

Here, we have two traits in action, linked by the impl in crate B. The problem is that, because the traits are connected in this way, adding the impl in crate A for trait A creates overlap for trait B. And crucially, it’s not enough to require that the impl we added be specializable; the problem is that the existing impl in crate B, which crate A doesn’t even know about, is not specializable.

Like all of our problems around API evolution, this problem boils down to negative reasoning. In particular, crate C is initially allowed to write its impl because it knows, locally, that C does not implement A (and thus the blanket impl in crate B doesn’t apply). The problem is that crate C isn’t the only crate that gets to decide whether its type implements A. So it’s a zero-sum game after all, and something like fundamental is needed to adjudicate between different crates.
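The negative reasoning at play can be observed even within a single crate. In this compiling sketch (names mirror the crates above), the impl for C is accepted today only because the compiler can see that C does not implement A:

```rust
// Everything in one crate for brevity; the comments mark which
// crate each item came from in the example above.

// Crate A:
trait A {}

// Crate B:
trait B {
    type Out;
}

// Non-specializable blanket impl linking the two traits.
impl<T: A> B for T {
    type Out = ();
}

// Crate C:
struct C;

// Accepted only because the compiler can see that C does not (and,
// by the orphan rule, cannot elsewhere) implement A.
impl B for C {
    type Out = bool;
}

// Downstream code may then rely on the concrete associated type:
fn relies_on_bool(x: <C as B>::Out) -> bool {
    x
}

fn main() {
    assert!(relies_on_bool(true));
    // Adding `impl<T> A for T {}` to "crate A" would make the
    // blanket B impl newly apply to C, and since that impl is not
    // specializable, the two impls of B for C would conflict.
}
```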

You might ask: is it really essential to make specialization opt-in? And indeed, if all impls were specializable, the idea would’ve worked out.

However, it’s not a tenable option. Associated types, in particular, need to opt in to specialization: unless an associated type is marked default, downstream code is entitled to rely on its concrete value, so making every associated type silently specializable would break that reasoning. And these days every method effectively has an implicit associated type.
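A stable-Rust sketch (invented names) of why that opt-in matters: downstream code may lean on the concrete value of a non-default associated type, so specializing it away would break type checking at a distance.

```rust
trait Compute {
    type Out;
    fn make(&self) -> Self::Out;
}

struct Id;

// No `default` here, so `Out` is a promise: it is exactly u32.
impl Compute for Id {
    type Out = u32;
    fn make(&self) -> u32 { 7 }
}

// Downstream code is entitled to lean on that promise:
fn double(x: <Id as Compute>::Out) -> u32 {
    x * 2
}

fn main() {
    // If `Out` could be silently specialized to another type, this
    // call would stop type-checking, which is why associated types
    // must opt in to specialization explicitly.
    assert_eq!(double(Id.make()), 14);
}
```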

Return of the Subset Rule

So where does that lead us?

In the short term, it means that we should stick with the subset rule as-is. (I didn’t go into detail here, but Niko, withoutboats, and I had been playing with some changes to support the idea of getting rid of fundamental, which would also have been breaking changes to the specialization rules; we’re backing off on that.)

In the long term, there are several extensions we’d like to explore for specialization. While these extensions won’t let us relax the orphan rule, they will allow for more kinds of overlap, thereby making specialization usable in more contexts.

  • Intersection impls, which allow arbitrary partial overlap between impls as long as there’s an additional impl that covers precisely the area of overlap (thereby avoiding any ambiguity).

  • Type structure precedence, which considers more specific type structure before considering where clauses. While the idea was originally motivated by relaxing the orphan rules, which we can’t do, it is still a potentially useful expansion of specialization.

  • Child trumps parent, in which a child crate impl always specializes any parent crate impl it overlaps with. This rule is particularly simple and is usually what you want when such overlap arises.
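A rough sketch of the first extension (hypothetical syntax; none of these extensions are accepted by any current compiler, even on nightly): an intersection impl covers exactly the overlap of two otherwise-ambiguous blanket impls.

```rust
// Hypothetical future Rust -- rejected by today's compiler.
trait Speak { fn speak(&self) -> &'static str; }

impl<T: Clone> Speak for T {
    fn speak(&self) -> &'static str { "clone" }
}

impl<T: Default> Speak for T {
    fn speak(&self) -> &'static str { "default" }
}

// The intersection impl applies to precisely the types covered by
// both impls above, so there is never any ambiguity about which
// impl wins for a type that is both Clone and Default.
impl<T: Clone + Default> Speak for T {
    fn speak(&self) -> &'static str { "both" }
}
```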

The good news is that:

  • The current subset rule is forwards-compatible with all of these extensions.
  • Moreover, these extensions are compatible with each other!

I think it’s probably worth ultimately landing all three extensions under separate feature gates, to gain experience and determine which ones are most useful. They are all pretty straightforward to implement.

In the meantime, though, we can press forward with stabilization of today’s specialization, as soon as the remaining implementation issues are resolved.