*Syntactic Theory*


# Correspondence between Feature Graphs and AVMs

## From AVMs to Feature Graphs

AVMs and feature structures are two sides of the same coin: AVMs are the notation that linguists use to depict feature structures, whereas the *real* feature structures are well-understood mathematical objects to which various results of graph theory can be applied.

In the conversion of an AVM to a feature graph, tags become nodes and features become outgoing edges:
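This direction can be sketched in Python. The encoding is an assumption for illustration: an AVM is a nested dict, and a reentrancy (a shared tag) is expressed by two features pointing at the very same Python object; node numbers are assigned in order of discovery.

```python
def avm_to_graph(avm):
    """Convert a nested-dict AVM into a feature graph.

    Returns (root, nodes, edges), where edges maps (node, feature)
    to the target node.  Substructures shared via aliasing (our
    stand-in for tags) are mapped to a single node.
    """
    seen, edges = {}, {}

    def visit(value):
        key = id(value)
        if key in seen:              # already visited: reentrant node
            return seen[key]
        node = len(seen)             # fresh node for this substructure
        seen[key] = node
        if isinstance(value, dict):  # complex value: recurse on features
            for feature, sub in value.items():
                edges[(node, feature)] = visit(sub)
        return node                  # atoms simply become leaf nodes

    root = visit(avm)
    return root, set(seen.values()), edges

# The AGR value is shared (tagged) between HEAD and SUBJ:
agr = {"NUM": "sg", "PER": "3"}
avm = {"HEAD": {"AGR": agr}, "SUBJ": {"AGR": agr}}
root, nodes, edges = avm_to_graph(avm)
```

After the conversion, the two `AGR` edges point at one and the same node, which is exactly the reentrancy the shared tag expressed in the AVM.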

## From Feature Graphs to AVMs

The reverse direction is trickier, because AVMs allow more freedom than feature structures: when the same variable occurs several times, its structure can be spelled out at any one of those occurrences. For this reason it is necessary to introduce the notion of *arborescence* first. If a feature graph has reentrancies, it can be represented by more than one tree.

The graph in this figure can be represented by the following two trees:
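The idea can also be made concrete in code: a reentrant node, i.e. one with more than one incoming edge, gives one tree per choice of which incoming edge to keep. The graph encoding, node numbers, and feature names below are invented for illustration.

```python
from itertools import product

# Hypothetical feature graph: node 2 is reentrant, reachable from
# the root 0 via both HEAD|AGR and SUBJ|AGR.
edges = {
    (0, "HEAD"): 1,
    (0, "SUBJ"): 3,
    (1, "AGR"): 2,
    (3, "AGR"): 2,
}

def arborescences(edges):
    """Enumerate the trees of an acyclic feature graph: for every
    reentrant node keep exactly one incoming edge, drop the rest."""
    incoming = {}
    for edge, target in edges.items():
        incoming.setdefault(target, []).append(edge)
    reentrant = [es for es in incoming.values() if len(es) > 1]
    trees = []
    for choice in product(*reentrant):   # one kept edge per reentrant node
        keep = set(choice)
        trees.append({e: t for e, t in edges.items()
                      if len(incoming[t]) == 1 or e in keep})
    return trees

trees = arborescences(edges)
```

With one reentrant node that has two incoming edges, the sketch yields exactly two trees: one keeps `HEAD|AGR`, the other `SUBJ|AGR`.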

The mapping from a feature graph to an AVM then goes as follows:

- drop all but one incoming edge of every reentrant node (you end up with a tree)
- define a set of tags, so that there is a correspondence between nodes in the graph and variables in the AVM

Note: the nodes of a feature graph are part of that graph's definition, whereas AVMs are defined over a universal set of variables. Therefore: predefine a set of variables for each AVM to serve as tags.

- work your way backwards from the leaves up to the root
- when done: restore the edges you dropped in the first step
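These steps can be sketched in Python, under an assumed encoding: the resulting AVM is a nested dict, and a `"#"` key holds a node's tag. The graph, node numbers, and feature names are invented for illustration.

```python
def graph_to_avm(edges, tree, root=0):
    """Turn one arborescence of a feature graph into an AVM.

    Nodes that lost an incoming edge receive a tag, and the dropped
    edges are restored as bare tag references.  A sketch for acyclic
    feature graphs only.
    """
    shared = {t for e, t in edges.items() if e not in tree}
    tags = {n: i + 1 for i, n in enumerate(sorted(shared))}

    def build(node):
        avm = {}
        if node in tags:
            avm["#"] = tags[node]             # this node carries a tag
        for (src, feat), tgt in edges.items():
            if src != node:
                continue
            if (src, feat) in tree:
                avm[feat] = build(tgt)        # tree edge: recurse
            else:
                avm[feat] = {"#": tags[tgt]}  # restored edge: tag only
        return avm

    return build(root)

# Hypothetical graph in which node 2 is reentrant; dropping the
# SUBJ|AGR edge leaves a tree:
edges = {(0, "HEAD"): 1, (0, "SUBJ"): 3, (1, "AGR"): 2, (3, "AGR"): 2}
tree = {e: t for e, t in edges.items() if e != (3, "AGR")}
avm = graph_to_avm(edges, tree)
# avm == {"HEAD": {"AGR": {"#": 1}}, "SUBJ": {"AGR": {"#": 1}}}
```

Had we dropped the `HEAD|AGR` edge instead, the tag reference and the expanded occurrence would simply swap places, which is the freedom in AVM notation discussed above.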

Let's return to our example from the arborescence discussion ...

... choose one of the trees ...

... and turn it into an AVM according to our steps:

Remember: when a feature graph has several different arborescences, it has just as many AVM expressions. These are not arbitrarily different, however, but are in fact renamings of each other.
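This equivalence can be made concrete with a small hypothetical helper that renames tags in order of first appearance; two AVMs that differ only in their tag names then compare equal. As above, the dict encoding with a `"#"` key for tags is an assumption for illustration.

```python
def canonical(avm, mapping=None):
    """Rename tags (values under the assumed "#" key) in order of
    first appearance, so renaming-equivalent AVMs compare equal."""
    if mapping is None:
        mapping = {}
    out = {}
    for feature, value in avm.items():
        if feature == "#":
            out["#"] = mapping.setdefault(value, len(mapping) + 1)
        elif isinstance(value, dict):
            out[feature] = canonical(value, mapping)
        else:
            out[feature] = value     # atomic value: copy unchanged
    return out

# The same structure written with tag 7 vs. tag 2 -- renamings:
a = {"F": {"#": 7}, "G": {"#": 7}}
b = {"F": {"#": 2}, "G": {"#": 2}}
assert canonical(a) == canonical(b)
```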