This paper presents a new approach to solving the Double Dummy Bridge Problem (DDBP). The DDBP is a hard classification task used by bridge-playing programs that rely on Monte Carlo simulations. The proposed method employs shallow autoencoders (AEs) in an unsupervised pretraining phase, followed by Multilayer Perceptron (MLP) networks with three hidden layers, built on top of the trained AEs, in a final supervised fine-tuning phase. The results are compared with our previous study, in which MLPs with similar architectures but without AEs or pretraining were applied to the same task. Several conclusions concerning efficient weight topologies and fine-tuning schemes of the proposed model, as well as interesting weight patterns discovered in the trained networks, are presented and explained.
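The training scheme described above, greedy unsupervised pretraining of shallow autoencoders whose encoder weights then initialize the hidden layers of an MLP, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the sigmoid activations, learning rate, epoch count, input dimension, and the hidden-layer widths (32, 16, 8) are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_autoencoder(X, n_hidden, epochs=200, lr=0.5):
    """Train a shallow (one-hidden-layer) autoencoder on X by
    minimizing squared reconstruction error; return the learned
    encoder weights and biases."""
    n_in = X.shape[1]
    W_enc = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b_enc = np.zeros(n_hidden)
    W_dec = rng.normal(0.0, 0.1, (n_hidden, n_in))
    b_dec = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W_enc + b_enc)   # encode
        R = sigmoid(H @ W_dec + b_dec)   # reconstruct
        # gradients of 0.5*||R - X||^2 through both sigmoid layers
        dZ_dec = (R - X) * R * (1.0 - R)
        dZ_enc = (dZ_dec @ W_dec.T) * H * (1.0 - H)
        W_dec -= lr * H.T @ dZ_dec / len(X)
        b_dec -= lr * dZ_dec.mean(axis=0)
        W_enc -= lr * X.T @ dZ_enc / len(X)
        b_enc -= lr * dZ_enc.mean(axis=0)
    return W_enc, b_enc

# Greedy layer-wise pretraining: each AE is trained on the hidden
# representation produced by the previous one.
X = rng.random((64, 52))       # toy stand-in for encoded bridge deals
layer_sizes = [32, 16, 8]      # hypothetical widths of the 3 hidden layers
weights, inputs = [], X
for n_hidden in layer_sizes:
    W, b = pretrain_autoencoder(inputs, n_hidden)
    weights.append((W, b))
    inputs = sigmoid(inputs @ W + b)

# 'weights' now initializes the three hidden layers of the MLP, which
# would then be fine-tuned on supervised labels (numbers of tricks).
print([W.shape for W, _ in weights])
```

After this loop, attaching a softmax output layer and continuing training with labeled deals would correspond to the fine-tuning phase; only the greedy pretraining stage is shown here.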