Statistical model selection problems arise in diverse areas. Some selection methods have exponential complexity and are thus computationally demanding. The purpose of this thesis is to propose computationally efficient and numerically reliable algorithms for statistical model selection. Particular emphasis is given to computationally intensive model selection strategies that evaluate regression trees and have combinatorial solutions. The computational efficiency of the proposed algorithms is investigated by detailed complexity analysis.

Parallel algorithms to compute all possible subset regression models are designed, implemented and analyzed. A branch-and-bound strategy that computes the best-subset regression models for each number of variables is proposed. A heuristic version of this strategy, which employs a tolerance parameter when deciding whether to cut a subtree, is also developed. Experimental results that support the theoretical analysis of the new strategies are presented.

The adaptation of the various regression tree strategies to subset Vector Autoregressive model selection problems is pursued. Special cases of subset selection which exploit the common columns of the data matrices and the Kronecker structure of the variance-covariance matrix are investigated. Within this context, a combinatorial algorithm is designed to compute efficiently the estimators of a seemingly unrelated regressions model with permuted exogenous data matrices.

The main computational component of the algorithms developed in this thesis is the QR decomposition and its modification. Efficient strategies are designed to solve the various matrix factorization problems which arise in the estimation procedures. The non-dense structure of the matrices is exploited, Kronecker products are not explicitly computed, and the computation of matrix inverses is avoided.