Just wanted to let you all know that Deep Learning with Keras, a book I co-authored with Antonio Gulli, was published by Packt on April 26, 2017. For those of you who follow me on social media such as LinkedIn and Twitter, and for family and friends on Facebook, this is old news, but to everyone else I apologize for the delay. Although if you're still reading my blog after all these years, I guess you accept (and forgive, thank you) that delays and apologies are somewhat par for the course here.
The book is targeted at the Data Scientist / Engineer starting out with Neural Networks. It contains a mix of theory and examples, but the focus is on the code, since we believe that the best way to learn something in this field is by looking at examples. All examples are in Keras, our favorite Deep Learning toolkit. By the time you are finished with the book, you should be comfortable building your own networks in Keras.
This book is also available on Amazon. If you end up reading it, do leave us a review and tell us what you liked and how we could have done better.
Yesterday, Antonio posted an image showing our book at #5 on Amazon. We initially thought it was ranked by sales and were thrilled that people liked our book so much, until someone pointed out that the ranking is most likely by query relevance. Oh, well! It was a good feeling while it lasted, though.
Today, I thought it might be interesting to share the story behind the book, and thank the people who made it possible. For those of you looking for technical content, fair warning - this post has none.
While I read a lot of books, I had never considered writing one. Like many other people in software engineering, I have switched fields multiple times, and books have been my way to gain (almost) instant expertise to help make the transition. But the authors I read were all quite accomplished, almost experts in their fields. I was neither, just a programmer who caught (and took advantage of) a few lucky breaks in his career, so end of story.
When Antonio asked me if I was interested in co-authoring a book on Deep Learning using Keras with him, I was undecided for a while. I felt that if I accepted, I would be implicitly claiming an expertise I didn't have. On the flip side, I had been working with Deep Learning models in Caffe, TensorFlow, and Keras for a while, so while I was definitely not an expert, I did have knowledge that could benefit people who were not as far along in their journey as I was. That last bit convinced me that I did have some value to add to a book, so I accepted.
Once I overcame my initial hesitation about being an author, I began to see it as a new experience, one that I enjoyed thoroughly during the process of writing the chapters. Antonio wrote the first half of the book (Chapters 1-4) and I wrote the second half (Chapters 5-8), but we reviewed each other's work before it went out for review by others. Since Antonio works for Google, he had Googlers internally review his chapters as part of their official process, and I was fortunate to have some of them review my work as well and provide valuable feedback. In addition, our technical reviewer from Packt, Nick McClure, also provided valuable suggestions. The book has benefited a great deal from the thoroughness of these reviews.
The speed at which our industry moves means that the people in it have to adapt quickly as well, and I am no exception. Often, when I pick up a new technology, I spend just enough time on the theory to build something that works. If I don't fully understand something that isn't central to what I am building, I just "accept" it and move on. Unfortunately, this doesn't work when you are writing a book - while I have tried to limit the theory to just enough to explain the model being built in code, the explanation needs to be accurate and complete. That meant revisiting some basic concepts to clarify them for myself, something I had neglected to do when learning them the first time. So in a sense, writing this book forced me to fill gaps in my own knowledge, and I am really grateful I did it.
From an engineering standpoint, I thought Packt's publication pipeline was quite cool. I had imagined that we would send the manuscripts back and forth over email, using the built-in comment mechanism supported by Microsoft Word or similar - at least that had been my experience as a reviewer for Packt in the past. Instead, they now have a Content Development Platform (CDP), a CMS (similar to Joomla or Drupal) customized for the publishing task. Authors enter their chapters into an online editor that supports code blocks, quotations, images, info boxes, etc., as well as version control. Reviewers make comments using the same interface, and the EBook and print copies are generated automatically from the updated content.
Our own process was somewhat hybrid. Because we started writing before we learned about the CDP, we began in Google Docs, which turned out to be a good choice since the documents could be shared easily with the Google reviewers. We ended up building all our chapters in Google Docs and then copying them over to the CDP after the Google reviews, at which point all comments and changes happened only on the CDP.
The editors from Packt were awesome to work with as well - many thanks to Divya Poojari (Acquisition editor), Cheryl Dsa (Content editor) and Dinesh Thakur (Publishing editor) for all their help guiding us through the various steps of the publishing process.
One thing that hit us towards the end, about a week before our originally scheduled release date, was the Keras 2 upgrade. Because it came so late in the process, we debated launching as-is and providing an upgrade guide to help readers port the provided code to Keras 2, but in the end we decided that the right thing to do was to upgrade our code before release. This did push back the schedule a bit, but the upgrade went relatively smoothly, thanks in large part to the very informative deprecation warnings that Keras provides.
Looking back, I am really grateful to Antonio for having confidence in my skills and offering me the opportunity to co-author the book with him. Writing the book was an extremely valuable experience for me. Quick shout-out also to two of my colleagues here at Elsevier, Ron Daniel and Bradley P Allen, both of whom have been working on Deep Learning for longer than I have, and whose experiences led me to investigate the subject further in the first place. Also, the last four months were pretty hectic, trying to balance work, the book and home, and I am grateful to my family for their patience.
Antonio and I have put a lot of thought and effort into this book. For the explanations, we have tried to strike a balance, presenting just enough detail to be complete without inundating you with math. For the code, we have tried to keep it simple enough to understand but not so simple that it ends up implementing something trivial. But all things considered, the true litmus test for the book is whether you, the reader, find it useful. We look forward to hearing from you.
2 comments (moderated to prevent spam):
Hello Sir.
Can you help me in designing the Siamese network using shared weights for two different image modalities?
Please share the code Sir.
Hi, interested in learning more about the problem, can you please comment with more details?