Automatic sign language recognition (SLR) in image sequences remains an unsolved problem. The variety of signers, backgrounds, sign executions and signer positions makes the development of SLR systems very challenging. Current methods try to alleviate this complexity by extracting engineered features to detect hand shapes, hand trajectories and facial expressions as an intermediate step for SLR.

Our goal is to approach SLR purely through feature learning rather than feature engineering. We tackle SLR using recent advances in the domain of deep learning. In recent years, deep learning has achieved state-of-the-art performance in many research domains, including image classification, speech recognition and human pose estimation. The deep learning models that we use are based on convolutional neural networks (ConvNets). A ConvNet is a model with many parameters that are adjusted iteratively by optimization algorithms (i.e., learning) using a large amount of annotated data.

In previous work, we showed that deep learning is very successful for gesture recognition and gesture spotting in videos recorded with a 3D camera (Microsoft Kinect). Our system is able to recognize 20 different Italian gestures (i.e., emblems). We achieved a classification accuracy of 95.68% in the ChaLearn 2014 Looking At People gesture spotting challenge and attained fifth place out of 35 participants. This gave us an indication that deep learning can be useful for SLR.

Data is a critical asset for making SLR work with deep learning, which is why we use the Flemish Sign Language Corpus, a linguistic corpus containing a growing number of videos of Flemish Sign Language signed by deaf people. The corpus is partially annotated with, at the time of writing, more than 22 thousand tagged glosses and more than five thousand unique glosses.
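The idea that a ConvNet is a parameterized model whose weights are adjusted iteratively by an optimization algorithm can be illustrated with a minimal sketch. The toy example below (plain NumPy; it is not the system described above, and all names, sizes and learning-rate values are illustrative) fits a single 3x3 convolution filter by gradient descent so that its response matches the response of a known edge-detection kernel on a random image:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))

# The "annotated data": responses of a known vertical edge detector.
target_kernel = np.array([[1.0, 0.0, -1.0]] * 3)
target = conv2d(image, target_kernel)

# Learnable parameters, initialized randomly.
kernel = rng.standard_normal((3, 3)) * 0.1
lr = 0.002  # small step size, chosen conservatively for stability

initial_loss = 0.5 * np.sum((conv2d(image, kernel) - target) ** 2)

for step in range(500):
    pred = conv2d(image, kernel)
    err = pred - target  # dLoss/dPred for the squared-error loss
    # Gradient of the loss w.r.t. each kernel weight:
    # dL/dk[i,j] = sum over output positions of err * matching image patch.
    grad = np.empty_like(kernel)
    for i in range(3):
        for j in range(3):
            grad[i, j] = np.sum(err * image[i:i + err.shape[0], j:j + err.shape[1]])
    kernel -= lr * grad  # one iterative optimization step

final_loss = 0.5 * np.sum((conv2d(image, kernel) - target) ** 2)
print(f"loss: {initial_loss:.3f} -> {final_loss:.6f}")
```

A real ConvNet stacks many such filters (plus nonlinearities and pooling) and trains all of them jointly the same way: compute a loss on annotated examples, compute gradients, and update the parameters.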