A bit over a century ago, the scientific community decided what a crystal was: a material whose atomic or molecular arrangement (this was the same era during which atoms and molecules were finally accepted) repeated periodically in three axial directions. Sir William Bragg and his son developed X-ray crystallography, and crystallographers could then develop good descriptions of what these repeating “unit cells” looked like.

This was probably a necessary step. Socrates would say that if we are going to study crystals, we must first decide what a “crystal” is.

Socrates’ is not a universal sentiment. To paraphrase Ludwig Wittgenstein: to teach a student what a crystal is, one presents the student with a diamond and says, “crystal”, then with a large salt cube and says, “crystal”, then with a lump of amethyst and says, “crystal”, and then the student starts getting the idea. In real life, Wittgenstein is right: definitions (and food fights over definitions) emerge from catalogues of examples and counterexamples.

That does not mean that definitions are a waste of the taxpayer’s money. Consider my current obsession: predicting crystals. Crystal prediction requires software, software requires theory, and theory requires definition. If one is to predict crystals, one needs to know *precisely* what crystals are. For crystals (as they were understood over most of the Twentieth century), one will probably wind up doing a variant of one of the following:

- Design a crystal by assembling a structure within the space of a unit cell. One takes a generic parallelepiped, with sides (vectors) labeled **x**, **y**, and **z**, as in this picture…

  …and then one *identifies* the three pairs of opposing faces of the unit cell, so that a fly buzzing into one face will then buzz out of the opposing face in the same direction. Within this unit cell, one assembles a structure, possibly adjusting the shape of the cell (i.e. adjusting **x**, **y**, and **z**) en route. (References for this sort of topology include Michael Henle’s A Combinatorial Introduction to Topology and Hajime Sato’s Algebraic Topology: An Intuitive Approach.)
- Assemble a structure by taking some kind of fragment or collection of fragments, and then attaching them one to another to another, all monitored by a device that can recognize when a unit cell or equivalent has been assembled. (References for this sort of group theory include John Meier’s Groups, Graphs and Trees.) This is the approach I proposed in my presentation to the MathCryst commission.

(All this also requires linear algebra – see, e.g. Hoffman & Kunze’s Linear Algebra – and abstract algebra – see, e.g. Israel Herstein’s Topics in Algebra.)
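The face identification in the first approach is easy to make concrete. Here is a minimal sketch (the cell matrix and the numbers are my own hypothetical example, not from any reference above): reducing a point’s fractional coordinates modulo 1 implements exactly the rule that a fly leaving through one face re-enters through the opposing face.

```python
import numpy as np

# Hypothetical unit cell: the columns of `cell` are the side vectors x, y, z.
cell = np.array([[4.0, 0.0, 0.0],    # x
                 [1.0, 3.0, 0.0],    # y
                 [0.0, 1.0, 5.0]]).T # z

def wrap(point, cell):
    """Identify opposing faces: map a Cartesian point back into the cell.

    A point leaving through one face re-enters through the opposite one,
    so we reduce its fractional coordinates modulo 1.
    """
    frac = np.linalg.solve(cell, point)  # Cartesian -> fractional
    frac %= 1.0                          # wrap into [0, 1)^3
    return cell @ frac                   # fractional -> Cartesian

# A point just past the +x face comes back in through the -x face.
p = wrap(np.array([4.5, 0.2, 0.1]), cell)
```

The same modulo-1 reduction is what crystallographic software does when it reports atomic positions “in the cell”.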

Both of the above approaches presume a definition of “crystal” that is somewhat like this:

*A crystal is a material composed of a finite number of types of constituents, and whose structure admits a symmetry from any constituent to any other constituent of the same type*.

This is the fundamental classical definition based on nanoscopic structure, and it is the one that a mathematician might start with. But this definition is not *the* definition that emerged from the Eighteenth and Nineteenth centuries and held sway until the 1980s. For the more popular definition, I’ll quote from Charles Kittel’s *Introduction to Solid State Physics (2nd ed.)*:

*A perfect crystal is considered to be constructed by the infinite regular repetition in space of identical structural units or building blocks*.

This definition is at least as old as Kepler, and may go back to the Greek atomists. Mathematically, these two definitions are equivalent, a fact that one might regard as the Fundamental Theorem of [Classical] Mathematical Crystallography: *A material is composed of a finite number of types of constituents such that its structure admits a symmetry from any constituent to any other constituent of the same type* if and only if *it is constructed by the infinite regular repetition in space of identical structural units or building blocks*.
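One direction of that equivalence can be checked in a toy computation (the one-dimensional setting and all the names below are mine, purely for illustration): build a structure by repeating a motif periodically, as in the second definition, and verify that any two constituents of the same type are then related by a lattice translation, as the first definition demands.

```python
# A toy 1-D "crystal": a motif with two constituent types, repeated with
# period 5.  Type "A" sits at offset 0.0 and type "B" at offset 1.5 in
# each copy of the unit cell.
period = 5.0
motif = {"A": 0.0, "B": 1.5}
n_cells = 4  # a finite patch of the (ideally infinite) structure

atoms = [(t, motif[t] + period * i) for i in range(n_cells) for t in motif]

def same_type_related(atoms, period):
    """Check the first definition on a periodic patch: any two constituents
    of the same type must be related by a lattice translation, i.e. their
    separation must be a whole number of periods."""
    types = {t for t, _ in atoms}
    for t in types:
        xs = [x for (s, x) in atoms if s == t]
        for a in xs:
            for b in xs:
                if (b - a) % period != 0:
                    return False
    return True

# An off-lattice "defect" breaks the symmetry between same-type constituents.
defective = atoms + [("A", 2.0)]
```

This only checks translations on a finite patch, of course; the theorem itself is about the infinite structure.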

Yet Kittel was typical in starting with the second definition, and the first definition – if it is mentioned at all – is mentioned as a rationale for the second. In practice, these two classical definitions above are quite different.

- The first definition arises from the apparent *homogeneity* of crystals; that is, it is about an observable property of crystals. Thus it is somewhat like what computer scientists call a *specification*: given a crystal, this is the “spec” that it has to satisfy. A specification may not say very much about what the object *is* so much as how it behaves.
- The second definition is closer to what applied mathematicians call a *model*. It is both *descriptive* (giving a better idea of how to recognize a crystal if you encounter one) and *prescriptive* (giving a better idea of how to construct one, if only out of styrofoam balls and toothpicks).

(Of course, the first definition is rather model-ish. We will see more pure specifications in a moment. In general, there is a spectrum from specification-ish to model-ish.)

Perhaps the main theme of the 2014 IUCr Congress is that old definitions have been replaced by new ones, thanks to quasicrystals and the like. Very roughly, the new definitions can be associated with the work of Dan Shechtman (who won the 2011 Nobel Prize in Chemistry) and of Aloysio Janner and Ted Janssen (who shared the 2014 Ewald Prize), respectively:

- From the IUCr Online Dictionary: *A material is a crystal if it has **essentially** a sharp diffraction pattern* (with the rest of the entry devoted to what “essentially” means). In Volume C of the IUCr tables, Janner, Janssen, Looijenga-Vos and de Wolff restrict this definition to require that “… its diffraction pattern is characterized by a discrete set of resolved Bragg peaks, which can be indexed accordingly by a set of *n* integers …”. These definitions are specifications, pure and simple. They impose criteria that must be satisfied in order for an object to be a quasicrystal, but they do not tell us what quasicrystals *are*.
- There are a number of mathematical models of such crystals. For example…
- One model is the *cut-and-slice* model, which I can oversimplify as follows. Given an *n*-dimensional lattice *L*, one creates a *slice* consisting of a *k*-dimensional subspace *S* and an (*n* − *k*)-dimensional “window” *W*, to get the *n*-dimensional slice *W* × *S*. Project all points of *L* in the slice orthogonally onto *S*; the projected points give the positions of the atoms (see, e.g., Marjorie Senechal’s Quasicrystals and Geometry).
- Another popular model is called *inflation*, which is a higher-dimensional (and geometric) analogue of what computer scientists call a *grammar*. A grammar consists of several rules for replacing individual letters with strings of letters. For example, the grammar defined by a → a, a → bab generates from a the strings of the form b…bab…b, with equal numbers of b’s on both sides of the a. Notice there is no rule for replacing b’s; in inflation, by contrast, sets of *substitution rules* have at least one rule for each letter. One can go beyond strings of letters to geometric shapes, replacing a shape (or tile) with some configuration of several tiles, and then “inflating” the configuration until the new tiles are the same size as the original ones, and repeating (see, e.g., Michael Baake and Uwe Grimm’s Aperiodic Order I: A Mathematical Invitation).
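The cut-and-slice model can be run in the simplest nontrivial case (this is my own toy sketch, with *n* = 2 and *k* = 1, not a construction taken from the references above): keep the points of the square lattice that fall in a strip around a line of slope 1/φ, project them onto that line, and the spacings between projected points take exactly two values, long and short in the golden ratio, arranged in the Fibonacci pattern.

```python
import math

# Cut-and-slice toy example (n = 2, k = 1): take the square lattice Z^2,
# keep the points lying in a strip (the "slice") around the line S spanned
# by the vector (phi, 1), and project them orthogonally onto S.
phi = (1 + math.sqrt(5)) / 2
norm = math.hypot(phi, 1)         # length of the direction vector (phi, 1)

def parallel(m, n):
    """Coordinate of the lattice point (m, n) projected onto S."""
    return (m * phi + n) / norm

def perpendicular(m, n):
    """Signed distance from (m, n) to S; the strip keeps 0 <= d < width."""
    return (m - n * phi) / norm

width = (1 + phi) / norm          # perpendicular extent of one unit cell

points = sorted(
    parallel(m, n)
    for m in range(-20, 21)
    for n in range(-20, 21)
    if 0 <= perpendicular(m, n) < width
    and abs(parallel(m, n)) <= 20  # stay away from the finite-patch boundary
)

# The spacings between consecutive projected points ("atoms") take exactly
# two values, whose ratio is the golden ratio.
gaps = sorted({round(b - a, 6) for a, b in zip(points, points[1:])})
```

Because the slope is irrational, the projected point set never repeats periodically, yet it is as orderly as a crystal.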

Such models give us visualizations of what a quasicrystal is. Cognitive scientists claim that we think in metaphors, and that is what makes these definitions valuable.
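The string side of the inflation picture is easy to experiment with. Here is a small sketch (my own illustration in Python) of the a → bab grammar from the text, alongside the classic Fibonacci substitution, which does have a rule for every letter.

```python
def step(word, rules):
    """Rewrite every letter of `word` that has a rule, all in parallel."""
    return "".join(rules.get(letter, letter) for letter in word)

# The grammar from the text: the only rule rewrites a as bab, so b's
# accumulate in equal numbers on both sides of the single a.
word = "a"
for _ in range(3):
    word = step(word, {"a": "bab"})
# word is now "bbbabbb"

# A genuine substitution system has a rule for each letter.  Under the
# Fibonacci substitution the word lengths grow as Fibonacci numbers.
fib_rules = {"a": "ab", "b": "a"}
w, lengths = "a", []
for _ in range(6):
    w = step(w, fib_rules)
    lengths.append(len(w))
# lengths is now [2, 3, 5, 8, 13, 21]
```

The geometric version of inflation replaces each letter with a tile and each rule with a dissection, but the bookkeeping is the same.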

Having a lot of definitions suggests that a field is new and practitioners have not settled on a definition to inflict on students. We can ask mathematicians for a new Fundamental Theorem, but it is possible that the situation is more complicated. For example, in 2000, Jeffrey Lagarias asked eleven questions about the relations between various definitions. At the 2014 IUCr Congress, Lorenzo Sadun (with Johannes Kellendonk) announced that the answer to Problem 4.10 was “no”, suggesting that the world is a little more complicated than anticipated. (See also Lagarias’s short paper on these definitions. I’d like to thank Lorenzo for helping me with some of these definitions.)

If we have several definitions, and if they are not equivalent, then we have a problem. Outside of encouraging food fights, definitions provide a methodological anchor. But it would be helpful if we could settle on what the subject of our endeavor is.

Often (as in classical crystallography), the goal is a single widely usable definition. In logic, a notion is *robust* if it is expressible in many different but straightforward ways. In mathematics, a *representation theorem* says that two different notions are actually equivalent. The Fundamental Theorem of [Classical] Mathematical Crystallography is such a representation theorem – and a particularly important one, since it connects a specification with a model. So some of us may hope for a demonstrably robust notion, whose robustness can be demonstrated by a representation theorem.

But that may not be in the cards. Sometimes the universe is messy, and what we really need is a catalog. We may then hope that organizing principles will arise out of the mounds of data, like quarks arising from the heaps of subatomic particles in the early 1960s.

Either way, we don’t seem to be there yet. And that means that the paradigm shift presided over by Shechtman, Janssen and Janner is still underway.