Segmentation of anatomical structures has traditionally been formulated as a perceptual grouping task and solved through clustering and variational approaches. However, such strategies require a priori knowledge to be explicitly defined in the optimization criterion, e.g., ``high-gradient border'', ``smoothness'', or ``similar intensity or texture''. This approach is limited by the validity of the underlying assumptions and cannot capture complex structure appearance. We introduce database-guided segmentation as a new data-driven paradigm that directly exploits expert annotations of the structures of interest in large medical databases. Segmentation is formulated as a two-step learning problem. The first step is structure detection, where we learn to discriminate between the object of interest and the background. The resulting classifier, based on a boosted cascade of simple features, also provides a global rigid transformation of the structure. The second step is shape inference, where we use a sample-based representation of the joint distribution of appearance and shape annotations. To learn the association between the complex appearance and the shape, we propose a feature selection mechanism and a corresponding metric. We show that the selected features outperform using the appearance directly. The proposed method has a wide range of applications; its performance is illustrated on fully automatic cardiac quantification in 2D and 3D ultrasound and automatic fetal measurements in 2D ultrasound.
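To make the second step concrete, the following is a minimal sketch of sample-based shape inference: an annotated database stores (feature vector, shape annotation) pairs, and a query's shape is inferred as a distance-weighted average of the shapes of its nearest neighbors in feature space. All data, dimensions, and function names here are illustrative stand-ins, not the paper's implementation; in particular, the features would in practice be the selected discriminative features rather than raw appearance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annotated database: N examples, each with a D-dimensional
# feature vector and a shape annotation given by K landmark points (2K values).
# Synthetic data stands in for real expert annotations.
N, D, K = 200, 16, 8
features = rng.normal(size=(N, D))
shapes = features[:, :1] * rng.normal(size=(1, 2 * K)) \
         + 0.1 * rng.normal(size=(N, 2 * K))


def infer_shape(query_feat, feats, shape_db, k=5):
    """Sample-based shape inference: average the shape annotations of the
    k database examples whose features are closest to the query, weighting
    each neighbor inversely by its distance."""
    dists = np.linalg.norm(feats - query_feat, axis=1)
    idx = np.argsort(dists)[:k]           # k nearest neighbors
    w = 1.0 / (dists[idx] + 1e-8)         # distance-based weights
    w /= w.sum()
    return w @ shape_db[idx]              # weighted average of their shapes


# A query close to database example 0 should recover a similar shape.
query = features[0] + 0.01 * rng.normal(size=D)
pred = infer_shape(query, features, shapes)
```

With `k=1` the method degenerates to copying the single nearest annotated shape; larger `k` trades fidelity for robustness to annotation noise.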