Over the last decades, many super-resolution techniques have been developed, each with its own advantages and drawbacks (specific dyes, specific mounting media, complex optical devices, etc.). In recent years, a family of techniques, of which SRRF (super-resolution radial fluctuations) is the best known, has made use of fluorescence fluctuations to improve the spatial resolution of the observed data. In this workshop we present FluoGAN, an unsupervised hybrid approach (i.e. based only on the observation of a fluctuating temporal sequence of 2D blurred, low-resolution data) combining generative adversarial learning and physical modelling. It provides a substantial gain in resolution and has the advantage of being quantitative in terms of fluorescence intensity.

The goal of this workshop is first to present the underlying physical principles of FluoGAN and then to demonstrate and test the approach on both simulated and real data. The strength of fluctuation-based super-resolution methods is that they do not require any special microscope, fluorophore or sample preparation. To demonstrate how the method works, we will use standard sample slides with 2-4 stainings of fibrillar structures (e.g. microtubules) in cell cultures. Since the approach is based on a computationally expensive optimisation of the parameters of neural networks and of a physical model, it relies on GPU computing, which we will address using Google Colab resources, showing results on ROIs. We will validate our results on Argolight calibration slide phantoms. If time and setup allow, we will then test FluoGAN on "real samples". Finally, we will compare the SRRF algorithm (widely distributed as an ImageJ plugin) with our own to discuss the advantages and drawbacks of both approaches.
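To give an intuition for the physical principle behind fluctuation-based super-resolution, the following is a minimal, purely illustrative sketch (not FluoGAN's actual code; all function names and parameter values are hypothetical). It simulates a stack of frames in which two nearby emitters blink independently, pushes each frame through a simple forward model (Gaussian PSF blur plus pixel binning), and compares the temporal mean image with the temporal variance map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_model(hr_frame, psf_sigma=6.0, downsample=4):
    """Toy camera model (illustrative only): blur a high-resolution
    frame with a Gaussian PSF, then bin pixels onto the coarser
    low-resolution sensor grid."""
    blurred = gaussian_filter(hr_frame, sigma=psf_sigma)
    h, w = blurred.shape
    return blurred.reshape(h // downsample, downsample,
                           w // downsample, downsample).mean(axis=(1, 3))

rng = np.random.default_rng(0)

# Two point emitters closer together than the PSF width, so a single
# blurred frame shows one merged blob.
emitters = [(32, 26), (32, 38)]

# Fluctuating acquisition: each emitter blinks independently from frame
# to frame -- this temporal fluctuation is the extra information that
# methods like SRRF (and FluoGAN) exploit.
stack = np.empty((200, 16, 16))
for t in range(stack.shape[0]):
    hr_frame = np.zeros((64, 64))
    for (row, col) in emitters:
        if rng.random() < 0.5:  # emitter "on" in this frame
            hr_frame[row, col] = 100.0
    stack[t] = forward_model(hr_frame)

mean_img = stack.mean(axis=0)  # what a long exposure would record
var_img = stack.var(axis=0)    # temporal fluctuation map

# Because the emitters blink independently, the variance map tends to
# concentrate more tightly around the true emitter positions than the
# mean image, revealing sub-diffraction structure.
print(mean_img.shape)  # (16, 16)
```

The design choice illustrated here is that higher-order temporal statistics of independently fluctuating emitters are spatially sharper than the time-averaged image; FluoGAN builds on the same fluctuation signal, but fits a physical forward model of this kind within an adversarial learning scheme rather than computing fixed statistics.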