Presentation of the talk entitled "Computational Creativity in the Visual Arts through Artificial Intelligence".
Explore the docs »
View Slides
·
Report Bug
·
Request Feature
This repository contains the presentation of the talk entitled "Computational Creativity in the Visual Arts through Artificial Intelligence". The talk was given at the Universidad Autónoma Metropolitana (Mexico). The presentation is in Spanish.
The content of this talk is listed below:
- Introduction: presentation about me and my institution.
- AI Revolution: technological breakthroughs in the last few years that have driven the Artificial Intelligence revolution.
- GANs: a basic introduction to Generative Adversarial Networks (GANs) and their architecture.
- Applications of GANs: some applications of GANs and their most revolutionary architectures.
- Other architectures: architecture of the DALL-E model.
- Conclusions: conclusions of the presentation, emphasising the importance of AI in the coming years.
- References of interest: references to the scientific articles seen during the presentation.
- Licence: the licence under which this presentation is distributed.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project.
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`).
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`).
- Push to the Branch (`git push origin feature/AmazingFeature`).
- Open a Pull Request.
Distributed under the Creative Commons BY-NC-SA licence. See Creative Commons BY-NC-SA for more information.
Diego M. Jiménez Bravo - @dmjimenezbravo - [email protected] - [email protected]
Project Link: https://github.com/dmjimenezbravo/VisualComputationalCreativityWithAI
- Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27.
- Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
- Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).
- Park, T., Liu, M. Y., Wang, T. C., & Zhu, J. Y. (2019). Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2337-2346).
- Chu, M., Xie, Y., Leal-Taixé, L., & Thuerey, N. (2018). Temporally coherent GANs for video super-resolution (TecoGAN). arXiv preprint arXiv:1811.09393.
- Zakharov, E., Shysheya, A., Burkov, E., & Lempitsky, V. (2019). Few-shot adversarial learning of realistic neural talking head models. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 9459-9468).
- CLIP: Connecting Text and Images.
- Esser, P., Rombach, R., & Ommer, B. (2021). Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12873-12883).