In Part II, we will build on these techniques to tackle several creative tasks, such as painting, writing, and composing music, through models including CycleGAN, encoder–decoder models, and MuseGAN. In addition, we shall see how generative modeling can be used to optimize playing strategy for a game (World Models) and take a look at the most cutting-edge generative architectures available today, such as StyleGAN, BigGAN, BERT, GPT-2, and MuseNet.
Prerequisites
This book assumes that you have experience coding in Python. If you are not familiar with Python, the best place to start is LearningPython.org. There are many free resources online that will allow you to develop enough Python knowledge to work with the examples in this book.
Also, since some of the models are described using mathematical notation, it will be useful to have a solid understanding of linear algebra (for example, matrix multiplication) and general probability theory.
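As a rough gauge of the level of linear algebra assumed, the sketch below multiplies two small matrices with NumPy. This is only an illustrative refresher; the choice of NumPy here is an assumption for the sake of the example, not a requirement of the book.

```python
import numpy as np

# Multiply a 2x3 matrix by a 3x2 matrix to give a 2x2 result.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])

C = A @ B  # equivalent to np.matmul(A, B)
print(C)   # [[ 4  5]
           #  [10 11]]
```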
Finally, you will need an environment in which to run the code examples from the book’s GitHub repository. I have deliberately ensured that none of the examples in this book requires prohibitively large amounts of computational resources to train.
There is a myth that you need a GPU in order to start training deep learning models—while this is of course helpful and will speed up training, it is not essential. In fact, if you are new to deep learning, I encourage you to first get to grips with the essentials by experimenting with small examples on your laptop, before spending money and time researching hardware to speed up training.
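If you do want to confirm what hardware your code will use, the snippet below is a minimal check, assuming you are working with TensorFlow 2.x (the equivalent call will differ for other frameworks); when no GPU is visible, training simply runs on the CPU.

```python
# Minimal check for GPU availability (assumes TensorFlow 2.x is installed).
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"Found {len(gpus)} GPU(s): {[gpu.name for gpu in gpus]}")
else:
    print("No GPU found - training will run on the CPU.")
```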