Despite their promise, deep learning (DL) techniques based on neural networks have been slow to spread for the segmentation of three-dimensional (3D) biomedical images. The first obstacle is the difficulty of implementing them: their use on “real” biomedical data often fails because 3D images are much harder to manipulate than 2D images. The second obstacle is the lack of comparative analyses against non-DL techniques on experimental datasets to determine whether they actually add value. In a comparative analysis we performed on a dataset of isolated 3D nuclei, we show that the nnU-Net method achieved the best performance, even against specialized methods such as Cellpose and StarDist. Although limited to semantic segmentation, nnU-Net is a very promising method for 3D biomedical images. It automates the choice of many complex parameters that previously had to be adjusted manually, and it appears able to segment a wide spectrum of 3D biomedical images. However, nnU-Net remains difficult to use, so we have completely restructured this method into a new modular framework, Biom3d. Biom3d addresses a continuum of user profiles, from non-programmers to DL developers, and matches the performance of nnU-Net when used in its generic mode. Moreover, Biom3d has the potential to exceed nnU-Net through the addition or modification of modules. In this workshop, we will explain the theory behind nnU-Net and show how to install Biom3d and train a 3D semantic segmentation model on 3D images using its graphical user interface. Participants will then be invited to try Biom3d on another dataset and with another model, exploiting its modularity and ease of reconfiguration.