Welcome to the third part of the series!

This post will introduce the basic techniques of programming with functions.

Pure Functions

Functions in a functional language closely resemble functions in mathematics. Let’s call these pure functions. A pure function, whenever called with the same arguments, will return the same result. For example, squareRoot 4 will always return 2, no matter where or when you call it.

To achieve this deterministic behavior, a function must be:

  1. Independent - no dependencies other than its arguments.
  2. Stateless - no state is preserved between calls.

The following C program shows a counter example (literally). The function defined here is neither independent nor stateless.

static int count;

int f(int x) {
  static int y;
  count++;       /* global state: number of calls made so far */
  y += 2;        /* internal state: preserved between calls */
  return x * y;
}
The function f keeps track of the number of times it has been called by updating the global variable count. It also maintains internal state in the local static variable y. As a result, two calls to f with the same argument produce different results.

printf("%d\n", f(20)); /* 40 */
printf("%d\n", f(20)); /* 80 */
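For contrast, here is what a pure counterpart looks like in SML (the function double is our own example, not from the series so far):

```sml
(* A pure function: the result depends only on the argument. *)
fun double x = x * 2

(* Every call with the same argument yields the same result. *)
val a = double 20  (* 40 *)
val b = double 20  (* 40 *)
```

No matter how many times double 20 is evaluated, the answer is always 40, because there is no hidden state for the calls to read or update.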

But many real-world problems do require maintaining state across calls. The good news is that there are techniques for programming with state without losing the nice properties of pure functions.

Why pure functions are important

Pure functions can be implemented and tested independently of other parts of the system and of its own history (previous calls). This makes it easy to design programs as compositions of functions.

As each function depends solely on its inputs, it can be understood on its own, without having to understand the rest of the program. This facility is lost if we introduce shared mutable state: the interactions between functions become more intimate, and the program can be understood only as a whole.

As pure functions compute only values and perform no side-effects, it is easier to reason about them using simple algebraic and logical techniques.

The lack of side-effects results in a property known as referential transparency. This means “equals can be replaced by equals”. For example, if we know that f a = a + 1, then we can replace all occurrences of f a with a + 1. Referential transparency helps with reasoning about programs. It can also lead to compiler optimizations that may not be possible in imperative languages.
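To make this concrete in SML (the definition of f below is our own illustration):

```sml
fun f a = a + 1

(* By referential transparency, f 10 may be replaced by 10 + 1
   anywhere in the program without changing its meaning. *)
val x = f 10    (* 11 *)
val y = 10 + 1  (* 11 *)
```

Such substitution is unsafe in the presence of side-effects: if f also printed to the console, replacing the call with 10 + 1 would silently change the program's behavior.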

All these properties make the functional paradigm ideal for programming both in the small and the large.

Basic Techniques

The basic techniques for writing functional programs are recursion and higher-order programming.

A recursive function is one that refers to itself, either directly or indirectly. In other words, a recursive function is defined in terms of itself.

In higher-order programming functions can have other functions as arguments and as results.

These two techniques underlie all advanced functional abstractions and idioms.

In this post we will meet recursion in all its forms. We start by studying the general pattern for writing recursive functions. Then we introduce two extensions to simple recursion - tree recursion (or nonlinear recursion) and mutual recursion.

We will also learn a technique for transforming recursion into iteration, thereby eliminating wasted stack space.


Let’s start with a very simple recursive function which finds the sum of the first n positive integers.

fun sum n =
    if n = 0 then 0
    else n + sum (n - 1)

The sum function demonstrates the general pattern of all recursive functions.

The first point to keep in mind is that the recursive call should get a value smaller than the current argument. The sum function achieves this by decrementing n by 1. This causes the function to reach a basis, where the argument is sufficiently small that the result can be computed directly. In this example, the basis is reached when n becomes zero.

The cases in which we call the function recursively are known as inductive steps. As we saw in the preceding paragraph, the inductive steps should eventually reach a basis, ensuring that the function returns a result instead of looping forever.

Now let us spend some time understanding how recursive calls are evaluated. We shall do this with an actual invocation of sum:

sum 5;
(* val it = 15 : int *)

To find the result 15, the following sequence of calls were made:

sum 5;
5 + (sum 4)
5 + 4 + (sum 3)
5 + 4 + 3 + (sum 2)
5 + 4 + 3 + 2 + (sum 1)
5 + 4 + 3 + 2 + 1 + (sum 0)
5 + 4 + 3 + 2 + 1 + 0

The steps of evaluation reveal that the recursive calls build up a chain of suspended computations. The operations in this chain are performed only after the basis case is reached. Until then, these operations are held in memory, on the function’s call stack. With deep enough recursion, the program will eventually run out of stack space and raise a runtime error.

Note: For recursive functions defined on integers, you may actually see an integer-overflow error before you reach the stack-overflow error. This largely depends on the SML implementation you are running.

There is a simple technique to make recursive functions execute in constant stack space. But before we go there, let’s study two extensions to basic recursion.

Tree recursion

The sum function generates a chain of suspended computations. There are also recursive functions that generate a tree of such computations.

Consider a function for computing Fibonacci numbers. The sequence starts with 0 and 1, and each subsequent number is the sum of the preceding two.

The rule for generating Fibonacci numbers is captured by the following recursive function:

fun fib n =
    if n = 0 then 0
    else if n = 1 then 1
    else fib (n - 1) + fib (n - 2)

Consider the call fib 5. To compute it, the recursion branches into two calls, fib 4 and fib 3. Each of these calls creates more “branches”, until the calls end at “leaves” that return 0 or 1. The call-paths generated by the recursion thus form a tree.

Though the definition of fib is simple and straightforward, it is not efficient. It duplicates a lot of work: in the example above, fib 3 is computed from scratch in two separate branches. The number of steps grows exponentially with the input. It also shares the growing-stack problem with the sum function.

Don’t be discouraged with what I just said! We will figure out ways to solve all these problems.

Though tree recursion may not be efficient for numeric computations, it is useful for processing other forms of complex data.
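As a taste of that (peeking ahead to user-defined data, so the datatype here is our own illustration), a function that sums the leaves of a binary tree is naturally tree recursive:

```sml
datatype tree = Leaf of int | Node of tree * tree

(* Each Node gives rise to two recursive calls, so the shape of
   the computation mirrors the shape of the data - here the
   branching is essential, not wasteful. *)
fun sumTree (Leaf n)      = n
  | sumTree (Node (l, r)) = sumTree l + sumTree r

val t = Node (Node (Leaf 1, Leaf 2), Leaf 3)
val total = sumTree t  (* 6 *)
```

Unlike fib, no work is duplicated here: each subtree is visited exactly once.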

Mutual recursion

Sometimes we need to write two or more functions that are mutually recursive, where each calls at least one other in the group. Many languages place restrictions on defining such functions, requiring forward declarations and so on. SML offers a simple mechanism for declaring mutually recursive functions.

An example of mutual recursion is shown below. These functions check whether a non-negative integer is even or odd.

fun isEven n =
    if n = 0 then true
    else isOdd (n - 1)
and isOdd n =
    if n = 0 then false
    else isEven (n - 1)

The implementation of these functions is based on the fact that isEven 4 is equivalent to isOdd 3, which in turn is equivalent to isEven 2, and so on down to 0. This is a rather inefficient way to figure out if a number is odd or even, but it demonstrates mutual recursion very well.

The mutually recursive functions are connected together by the and keyword. The keyword fun needs to appear only once, at the very beginning of the definitions.

More practical applications of mutual recursion can be found in the implementations of recursive descent parsers and finite-state machines.


As we saw earlier, recursive functions create chains or trees of deferred computations. This means that, for large inputs, they may run out of stack space.

It should be possible to fix this problem if recursive functions are defined in such a way that no deferred computations need to be built up. Instead, we can keep track of the result computed so far in an additional parameter to the function. This extra parameter is commonly called an accumulator.

Let us try to apply this idea to the sum function defined earlier. To avoid confusion, we will call the new definition sum2:

fun sum2 (n, s) =
    if n = 0 then s
    else sum2 (n-1, n+s)

The sum2 function will not create a chain of deferred computations, as the result is accumulated in the second component of the argument (s). The consequence is that we must always call sum2 with 0 as the initial value of s.

sum2 (5, 0)
(* val it = 15 : int *)

These are the recursive calls that just happened:

sum2 (5, 0)
sum2 (4, 5)
sum2 (3, 9)
sum2 (2, 12)
sum2 (1, 14)
sum2 (0, 15)

As we can see, the recursive calls to sum2 do not cause the stack to grow. SML guarantees that functions like sum2 execute in constant stack space. Such functions are known as tail recursive. The name derives from the fact that this optimization can be applied only to recursive calls in a tail position in the function’s body. In other words, the recursive call must be the last action performed by the function.
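To see the difference, compare the recursive calls in the two definitions from this post:

```sml
(* Not a tail call: after sum returns, n must still be added,
   so the addition is deferred on the stack. *)
fun sum n =
    if n = 0 then 0
    else n + sum (n - 1)

(* A tail call: sum2 is the last action performed; there is
   nothing left to do when the recursive call returns. *)
fun sum2 (n, s) =
    if n = 0 then s
    else sum2 (n - 1, n + s)
```

In sum, the pending `n +` is what forces the stack to grow; in sum2, the addition happens before the call, inside the argument.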

Functions with only tail-recursive calls are described as being iterative. The state of an iteration can be inferred from the arguments of the active call.

We have seen how to transform a linear recursion into iteration by adding an extra state-parameter to the function. The same technique can be applied to functions that generate a tree recursive process. This is exemplified by the following iterative definition of the Fibonacci function:

fun fib2 (a, b, n) =
    if n = 0 then b
    else fib2 (a + b, a, n - 1)

The recursive call to fib2 has been moved to the tail position by adding the integers a and b to the parameter list. When calling the function, these should be initialized to 1 and 0, respectively.

fib2 (1, 0, 10)
(* val it = 55 : int *)

The following transformations are applied repeatedly (simultaneously, so the second uses the old value of a):

a <- a + b
b <- a

After applying these transformations n times, a and b will be equal, respectively, to fib (n + 1) and fib n.
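Repeating the definition for reference, a trace of fib2 (1, 0, 5) shows the parameters a and b stepping through the sequence:

```sml
fun fib2 (a, b, n) =
    if n = 0 then b
    else fib2 (a + b, a, n - 1)

(* fib2 (1, 0, 5)
   fib2 (1, 1, 4)
   fib2 (2, 1, 3)
   fib2 (3, 2, 2)
   fib2 (5, 3, 1)
   fib2 (8, 5, 0)  -- n = 0, so return b = 5, which is fib 5 *)
val five = fib2 (1, 0, 5)  (* 5 *)
```

As with sum2, every call is in tail position, so the trace is a flat sequence of calls rather than a chain of pending additions.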

Local definitions

There is one problem with the iterative functions defined in the preceding section: they leak an implementation detail in the form of the accumulator parameter. This makes the functions harder to use and opens up opportunities for bugs.

For instance, a user of the sum2 function must always remember to pass 0 as the second component of the argument. If, by mistake, another value is passed, sum2 will return a wrong result.

The sum2 function fails to provide the right level of abstraction. It should hide the details of how the result is calculated. The user should be able to reason about the function in terms of the parameter n alone.

We can achieve this higher level of abstraction with the help of local definitions. Any variable or function that is part of the implementation detail should be declared locally in the function. This can be achieved by the let .. in .. end expression.

One or more val or fun declarations can appear between let and in. The expression that follows in can refer to these variables. This expression is known as the body of the let. The variables introduced by let .. in will not be visible outside the body.

fun sum n =
    let
        fun sumIter (n, s) =
            if n = 0 then s
            else sumIter (n - 1, n + s)
    in
        sumIter (n, 0)
    end

The definition of sumIter, and the call to it with 0 as the initial accumulator, are internal implementation details that ensure sum never leads to a stack overflow error. The user of sum is oblivious to the existence of sumIter.

sum 10000
(* val it = 50005000 : int *)

Exercise 3.1 Write an iterative version of fib in terms of a local fibIter function.


In this post we learned about pure functions and their advantages. We also explored the general patterns for writing recursive and iterative functions.

The next couple of posts will continue to explore functions.

We will look at the second major functional technique, i.e higher-order programming. We will learn about interesting abstractions that can be built by the composition and partial application of functions.

After these, we will study functions that manipulate complex data like lists and records.

See you soon!
