multiTree is a Java-based computer program for the analysis of multinomial processing tree (MPT) models. multiTree provides parameter estimation, hypothesis testing, checks for identifiability, bootstrapping, and power analysis.

multiTree requires version 6.0 or later of the Java Runtime Environment to be installed on the target machine. The latest version of the Java Runtime Environment can be downloaded free of charge.

Please send questions/comments/suggestions to morten.moshagen[at]

For more details see:
Moshagen, M. (2010). multiTree: A computer program for the analysis of multinomial processing tree models. Behavior Research Methods, 42, 42–54.


*Downloading multiTree with Chrome may not work. Please use a different browser.*

 multiTree for Linux

Install: Extract the archive and run multiTree. 
Download multiTree v046 32bit (~4 MB) 
Download multiTree v046 64bit (~4 MB)

 multiTree for Mac

Install: Mount disk image and run multiTree. 
Download multiTree v046 (~9 MB)

Note: If you run into trouble using the file above, you may try to download the plain multiTree jar file and launch it either with the enclosed shell script (launchMT) or via the terminal using the command "java -XstartOnFirstThread -jar multitree.jar" (JVM options such as -XstartOnFirstThread must precede -jar).

 multiTree for Windows

Install: Launch installer and follow the on-screen instructions. 
Download multiTree v046 32bit (~4 MB) 
Download multiTree v046 64bit (~4 MB)

Note: The multiTree user interface may appear misarranged on high-resolution displays. This can be fixed by applying proper scaling to the UI elements as follows: right-click multitree.exe, select Properties, open the Compatibility tab, and then enable "Disable display scaling on high DPI settings" (set to System).

Help and Tutorials

Video tutorials:

Model Selection Based on Minimum Description Length (MDL / FIA)

multiTree can compute the Fisher information approximation (FIA), a model selection criterion based on the minimum description length principle that, like AIC or BIC, trades off the fit and complexity of the models under consideration. FIA has the advantage of taking the functional complexity of the models into account (e.g., how the parameters in the MPT model are connected and whether order constraints such as Do > Dn are included).

However, FIA is only an approximation and can fail when the number of observations is too small, leading to severely biased model selection. As a remedy, it should only be used when the total number of observations (usually the number of responses per participant times the number of participants) exceeds a lower bound. The following Excel sheet computes this lower-bound sample size for FIA: 
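The check described above can be sketched in a few lines: multiply the number of responses per participant by the number of participants and compare the result against the lower bound obtained from the Excel sheet. This is a minimal illustration only; the class and method names, the example counts, and the bound of 2000 are hypothetical (the actual lower bound depends on the model and must be taken from the sheet).

```java
// Hedged sketch: compute the total observation count used in the
// FIA lower-bound check. The bound itself is model-specific and
// comes from the Heck, Moshagen, & Erdfelder (2014) Excel sheet;
// the value 2000 below is a made-up placeholder.
public class FiaSampleCheck {

    // Total observations = responses per participant * number of participants.
    static long totalObservations(int responsesPerParticipant, int participants) {
        return (long) responsesPerParticipant * participants;
    }

    public static void main(String[] args) {
        long n = totalObservations(120, 40); // e.g., 120 trials, 40 participants
        long lowerBound = 2000;              // hypothetical bound from the sheet
        System.out.println("N = " + n + ", FIA usable: " + (n >= lowerBound));
        // prints: N = 4800, FIA usable: true
    }
}
```

If the total falls below the bound, FIA should not be trusted for that comparison and a criterion such as AIC or BIC may be preferable.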

    For more details see: Heck, D. W., Moshagen, M., & Erdfelder, E. (2014). Model selection by minimum description length: Lower-bound sample sizes for the Fisher information approximation. Journal of Mathematical Psychology, 60, 29–34. doi:10.1016/


    You may use multiTree free of charge for academic and personal use. If you want to use multiTree for commercial applications, you need to contact the author. Although considerable effort has been put into program development and evaluation, there is no warranty whatsoever.