In this work, we consider the problem of reconstructing and super-resolving images from low-resolution (LR) events when no ground-truth (GT) high-resolution (HR) images or degradation models are available. We propose a novel end-to-end joint framework for single-image reconstruction and super-resolution from LR event data. Our method is primarily unsupervised, coping with the absence of GT supervision, and deploys adversarial learning. To train our framework, we constructed an open dataset comprising simulated events and real-world images. The use of this dataset boosts the network performance, and the network architectures and the loss functions tailored to each phase help improve the quality of the resulting images. Extensive experiments show that our method surpasses state-of-the-art LR image reconstruction methods on both real-world and synthetic datasets. Experiments on super-resolution (SR) image reconstruction also substantiate the effectiveness of the proposed method. We further extend our method to the more challenging problems of high-dynamic-range (HDR) and sharp image reconstruction, as well as color events. In addition, we demonstrate that the reconstruction and super-resolution results serve as intermediate representations of events for high-level tasks such as semantic segmentation, object recognition, and detection. Finally, we examine how events affect the outputs of the three phases and analyze our method's efficacy through an ablation study.