Python Deep Learning

By: Vasilev, Daniel Slater, Spacagna, Roelants, Zocca
Overview of this book

With the surge in artificial intelligence in applications catering to both business and consumer needs, deep learning is more important than ever for meeting current and future market demands. With this book, you'll explore deep learning and learn how to put machine learning to use in your projects. This second edition of Python Deep Learning will get you up to speed with deep learning, deep neural networks, and how to train them with high-performance algorithms and popular Python frameworks. You'll uncover different neural network architectures, such as convolutional networks, recurrent neural networks, long short-term memory (LSTM) networks, and capsule networks. You'll also learn how to solve problems in the fields of computer vision, natural language processing (NLP), and speech recognition. You'll study generative model approaches such as variational autoencoders and Generative Adversarial Networks (GANs) to generate images. As you delve into newly evolved areas of reinforcement learning, you'll gain an understanding of the state-of-the-art algorithms behind agents that play popular games such as Go, Atari titles, and Dota. By the end of the book, you will be well-versed in the theory of deep learning along with its real-world applications.
Table of Contents (16 chapters)
Title Page
About Packt
Contributors
Preface
Index

Variational autoencoders


To understand VAEs, let's talk about regular autoencoders first. An autoencoder is a feed-forward neural network that tries to reproduce its input. In other words, the target value (label) of an autoencoder is equal to the input data, y_i = x_i, where i is the sample index. We can formally say that it tries to learn an identity function, f(x) = x (a function that repeats its input). Since our "labels" are just the input data, the autoencoder is an unsupervised algorithm. The following diagram represents an autoencoder:

Figure: An autoencoder

An autoencoder consists of an input, hidden (or bottleneck), and output layers. Although it's a single network, we can think of it as a virtual composition of two components:

  • Encoder: Maps the input data to the network's internal representation. For the sake of simplicity, in this example the encoder is a single, fully-connected hidden bottleneck layer. The internal state is just its activation vector. In general, the encoder can have multiple hidden...
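The encoder–bottleneck–decoder structure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the book: a single fully-connected bottleneck layer acts as the encoder, a linear output layer acts as the decoder, and the network is trained by gradient descent on the mean-squared reconstruction error between the input and its reconstruction. The layer sizes, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_bottleneck = 64, 8, 3

# Toy input data; the "labels" are the inputs themselves (y_i = x_i).
X = rng.normal(size=(n_samples, n_features))

# Encoder and decoder weight matrices.
W_enc = rng.normal(scale=0.1, size=(n_features, n_bottleneck))
W_dec = rng.normal(scale=0.1, size=(n_bottleneck, n_features))

# Reconstruction loss before training, for comparison.
init_loss = np.mean((np.tanh(X @ W_enc) @ W_dec - X) ** 2)

lr = 0.5
for step in range(1000):
    H = np.tanh(X @ W_enc)      # encoder: the internal (bottleneck) representation
    X_hat = H @ W_dec           # decoder: the reconstruction of the input
    err = X_hat - X
    loss = np.mean(err ** 2)    # mean-squared reconstruction error

    # Backpropagate through the decoder, the tanh, and the encoder.
    grad_out = 2 * err / err.size
    grad_W_dec = H.T @ grad_out
    grad_H = grad_out @ W_dec.T
    grad_W_enc = X.T @ (grad_H * (1 - H ** 2))

    W_dec -= lr * grad_W_dec
    W_enc -= lr * grad_W_enc

print(f"reconstruction MSE: {init_loss:.4f} -> {loss:.4f}")
```

Because the bottleneck (3 units) is narrower than the input (8 features), the network cannot simply copy its input; it is forced to learn a compressed internal representation, which is the whole point of the architecture.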
