I prefer to think of Haskell-like lazy evaluation as constructing a dataflow graph. The expression `map f (sort xs)` constructs a dataflow graph that streams each output of the sort into `f`, and printing the result begins running that job. Through that lens, the Haskell program is more like constructing a Spark pipeline. But you can also think of it as just sorting a list and then transforming each element with a function. It only makes a difference in resource costs or when there's potential nontermination involved, unless you use unsafe effects (e.g., `unsafePerformIO`).
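To make the dataflow picture concrete, here's a minimal sketch in Haskell. The worker `f` and the five-element list are just illustrative; `f` logs each call with `Debug.Trace`, so you can watch how much of the pipeline actually runs:

```haskell
import Data.List (sort)
import Debug.Trace (trace)

-- An illustrative `f` that logs each call, so we can see how much
-- of the pipeline is actually executed.
f :: Int -> Int
f x = trace ("f applied to " ++ show x) (x * 10)

main :: IO ()
main = do
  let xs = [5, 3, 9, 1, 7] :: [Int]
      -- This binding only builds the graph; nothing is sorted or mapped yet.
      pipeline = map f (sort xs)
  -- Demanding one element runs just enough of the job:
  -- f fires once, not five times.
  print (take 1 pipeline)
```

Running this prints a single trace line before `[10]`: the map never touches the other four elements.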
Is there a way to think of proofs as being lazy? Yes, but it's not what you think. It's an idea in proof theory called polarization. Some parts of a proof can be called positive or negative. Positive parts roughly correspond to strict, and negative parts roughly correspond to lazy.

To explain a bit more: Suppose you want to prove all chess games terminate. You start by proving "there is no move in chess that increases the number of pieces on the board." This is a lemma with type `forall m: ChessMove, forall b: BoardState, numPieces b >= numPieces (applyMove m b)`. Suppose you now want to prove that, throughout a game of chess, the amount of material never increases. You would do this by inducting over the first lemma, which is essentially the same as using it in a recursive function that takes in a board state and a series of moves, and outputs a proof that the final state does not have more material than the initial state. This is compact, but intrinsically computational. But now you can imagine unrolling that recursive function and getting a different proof that the amount of material never increases: simply write out every possible chess game and check. This unrolling is called "cut elimination."
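As a rough illustration, here's what that might look like as a Lean 4 sketch. The chess types and functions (`ChessMove`, `BoardState`, `numPieces`, `applyMove`) are hypothetical placeholders, axiomatized rather than implemented, and the theorem is written literally as a recursive function over the list of moves:

```lean
-- Hypothetical placeholders for a chess formalization; axiomatized, not implemented.
axiom ChessMove  : Type
axiom BoardState : Type
axiom numPieces  : BoardState → Nat
axiom applyMove  : ChessMove → BoardState → BoardState

-- The lemma from the text: no move increases the piece count.
axiom no_move_adds_pieces :
  ∀ (m : ChessMove) (b : BoardState),
    numPieces (applyMove m b) ≤ numPieces b

-- A game is just a sequence of moves folded over a starting board.
noncomputable def playGame (b : BoardState) : List ChessMove → BoardState
  | []      => b
  | m :: ms => playGame (applyMove m b) ms

-- The induction, written literally as a recursive function: it consumes a
-- list of moves and a board, and produces a proof that material never
-- increases over the whole game.
theorem material_never_increases :
    ∀ (ms : List ChessMove) (b : BoardState),
      numPieces (playGame b ms) ≤ numPieces b
  | [],      _ => Nat.le_refl _
  | m :: ms, b =>
    Nat.le_trans (material_never_increases ms (applyMove m b))
                 (no_move_adds_pieces m b)
```

Unrolling `material_never_increases` over every possible move list is the cut-elimination picture: the recursion disappears and you're left with one enormous case analysis.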
So you can see there's a sense in which every component of a proof is "executable," and you can ask whether it executes in a strict or a lazy manner. Implications ("if A, then B") are lazy. Conjunctions ("A and B") can be either strict or lazy, depending on how they're used. I'm at the edge of my depth here and can't explain more -- in all honesty, I never truly grokked proof polarization.
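One way to get a feel for this is the Curry-Howard reading, where an implication is a function type (inert until applied) and a conjunction is a pair. This is only an analogy to polarization, not the real thing, and the function names below are illustrative, but Haskell does let you consume a pair either lazily or strictly:

```haskell
{-# LANGUAGE BangPatterns #-}

-- A "conjunction" as a pair, consumed two ways.
lazyFst :: (Int, Int) -> Int
lazyFst ~(a, _) = a      -- irrefutable match: the second component is never forced

strictFst :: (Int, Int) -> Int
strictFst (!a, !_) = a   -- bang patterns: both components are forced up front

main :: IO ()
main = do
  print (lazyFst (1, undefined))    -- fine: prints 1
  print (strictFst (2, undefined))  -- crashes: the bangs force the undefined half
```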
Conversely, in programming languages, it's not strictly accurate to say that the C program is strict and the Haskell program is lazy. In C, function definitions and macro expansions are lazy: a macro argument that is never used in the macro's body is never expanded. So you can have the `BAR()` macro expand to ill-formed code, and yet `FOO(BAR())` need not produce a compile error. In Haskell, bang patterns, primitives like `Int#`, and the `seq` operator are all strict.
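A small sketch of those strict corners of Haskell (the function names are illustrative):

```haskell
{-# LANGUAGE BangPatterns #-}

-- A strict accumulator: the bang forces `acc` at every step,
-- evaluating the way a strict language would.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

main :: IO ()
main = do
  print (const (1 :: Int) (undefined :: Int))  -- fine: `const` never forces its second argument
  print (sumStrict [1 .. 1000000])             -- the bangs keep the sum evaluated as it goes
  print ((undefined :: Int) `seq` (2 :: Int))  -- crashes: `seq` forces its first argument
```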
So the reasoning "proofs are lazy, C is strict, and Haskell is lazy, therefore the Haskell program is more like a proof" doesn't hold. It's not even accurate to say that C is strict and Haskell is lazy. Within a proof, and within both a C and a Haskell program, you can find lazy parts and strict parts.