It's not hard to do, in general.
To build one, you would need to spec out what you want in an interface and what you mean by "contact".
The real issue comes down to usability. The simplest user interface would be "take a picture of your side of the board with your device and let the computer calculate where your figures and the important terrain features are, automatically adjusting for the position of your device (6 DOF)". It is also the most time-consuming option to build. Not hard, exactly – the algorithms needed are well known – but it would take a lot of time to tune them to recognize what you want without picking up false positives. In theory it is impossible to eliminate false positives and negatives entirely in such an application, but in practice you can get the mismatch between intent and algorithm down to a level you can live with. You would likely need several recognition fingerprint data sets for different figure/terrain combinations, and all of them would need tuning.
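The tuning trade-off described above can be shown with a toy sketch: imagine the recognizer emits a match score for each candidate blob in the photo, and a single threshold decides "figure" vs. "not a figure". The scores and labels below are invented purely for illustration.

```python
# Toy illustration of recognition tuning: raising the score threshold
# trades false positives for false negatives, and "tuning" means
# finding the level of mismatch you can live with.
# (score, is_really_a_figure) pairs from a hypothetical recognition pass:
candidates = [
    (0.95, True), (0.88, True), (0.82, True), (0.79, False),
    (0.65, True), (0.60, False), (0.40, False), (0.15, False),
]

def error_rates(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for score, real in candidates if score >= threshold and not real)
    fn = sum(1 for score, real in candidates if score < threshold and real)
    return fp, fn

for t in (0.5, 0.7, 0.9):
    fp, fn = error_rates(t)
    print(f"threshold={t}: {fp} false positives, {fn} false negatives")
```

A permissive threshold (0.5) flags terrain as figures; a strict one (0.9) misses real figures; somewhere in between (here 0.7) is the acceptable compromise. A real recognizer would have many such knobs per figure/terrain fingerprint, which is where the tuning time goes.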
An easier-to-maintain option might be "take a pic of the empty board and outline the terrain and two 'calibration points'; during the game, take a new pic, tap your units and the calibration points, and post your move". This moves a lot of the grunt work to the operator – not complex grunt work, but repetitive stuff that could get annoying after a while. It would, however, be very quick to develop and easy to maintain, with a high degree of inherent reliability.
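A minimal sketch of the two-calibration-point idea, assuming a roughly top-down photo: two tapped reference points pin down a 2-D similarity transform (rotation, uniform scale, translation) that maps every other tapped pixel onto board coordinates. All names and coordinates here are invented; a photo taken at an angle would need four points and a full homography instead.

```python
# Map tapped pixel positions to board coordinates using the two
# calibration points the operator taps each turn.  Complex numbers
# make the rotation + uniform-scale math a one-liner.

def make_pixel_to_board(pix_a, pix_b, board_a, board_b):
    """Build a pixel->board mapping from two tapped calibration
    points (photo positions) and their known board positions."""
    pa, pb = complex(*pix_a), complex(*pix_b)
    ba, bb = complex(*board_a), complex(*board_b)
    # One complex multiplier encodes both rotation and uniform scale.
    m = (bb - ba) / (pb - pa)
    def to_board(pix):
        z = ba + (complex(*pix) - pa) * m
        return (z.real, z.imag)
    return to_board

# Calibration: where the two marked points appear in this turn's photo
# vs. their known positions on a 24" x 24" board (made-up numbers).
to_board = make_pixel_to_board((100, 100), (500, 100), (0.0, 0.0), (24.0, 0.0))
print(to_board((300, 100)))   # a unit tapped midway along that edge
```

Each tapped unit then becomes a board coordinate the app can post as part of the move.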
Of course, there are a number of "in between" options, too.
If you need the app to differentiate between units (this one detects at that range, those have IR and ignore certain terrain, etc.), then the time, overhead (whether in development or in game), and error rates all go up.
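One way those per-unit rules could be modeled once the app has to tell units apart is a small profile per unit type plus a detection check. The stats and rule names below are invented placeholders, not taken from any actual game system.

```python
# Sketch of per-unit detection rules: each unit type gets a profile,
# and a single check answers "does this unit spot that target?".
from dataclasses import dataclass, field

@dataclass
class UnitProfile:
    name: str
    detection_range: float           # how far this unit can spot, in inches
    has_ir: bool = False             # IR sensors see through smoke
    ignores_terrain: frozenset = field(default_factory=frozenset)

def can_detect(spotter, distance, blocking_terrain):
    """True if `spotter` detects a target `distance` away behind the
    listed terrain pieces (given by type name)."""
    if distance > spotter.detection_range:
        return False
    for terrain in blocking_terrain:
        if terrain == "smoke" and spotter.has_ir:
            continue                 # IR ignores smoke
        if terrain in spotter.ignores_terrain:
            continue
        return False                 # this terrain blocks line of sight
    return True

scout = UnitProfile("scout", detection_range=18.0, has_ir=True)
rifleman = UnitProfile("rifleman", detection_range=12.0)

print(can_detect(scout, 10.0, ["smoke"]))      # True: IR ignores smoke
print(can_detect(rifleman, 10.0, ["smoke"]))   # False: smoke blocks
print(can_detect(scout, 24.0, []))             # False: out of range
```

The extra cost isn't the data structure itself, it's that the photo-recognition side (or the operator) now has to identify *which* unit each blob or tap is, which is where the added development time and in-game overhead come from.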
Again, none of this is "hard" to do. You could google all the software components necessary. What is critical is to scope out your user experience first, then get someone to build it.